A Personalized Video-Based Hand Taxonomy: Application for Individuals with Spinal Cord Injury (2403.18094v1)
Abstract: Hand function is critical for our interactions and quality of life. Spinal cord injuries (SCI) can impair hand function, reducing independence. A comprehensive evaluation of function in home and community settings requires a hand grasp taxonomy for individuals with impaired hand function. Developing such a taxonomy is challenging due to grasp types unrepresented in standard taxonomies, uneven data distribution across injury levels, and limited data. This study aims to automatically identify the dominant distinct hand grasps in egocentric video using semantic clustering. Egocentric video recordings collected in the homes of 19 individuals with cervical SCI were used to cluster grasping actions with semantic significance. A deep learning model integrating posture and appearance data was employed to create a personalized hand taxonomy. Quantitative analysis revealed a cluster purity of 67.6% ± 24.2% with 18.0% ± 21.8% redundancy. Qualitative assessment revealed meaningful clusters in the video content. This methodology provides a flexible and effective strategy for analyzing hand function in the wild. It offers researchers and clinicians an efficient tool for evaluating hand function, supporting sensitive assessments and tailored intervention plans.
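The abstract summarizes clustering quality with two numbers, cluster purity and redundancy. As an illustration only, the sketch below shows one common way such metrics can be computed from cluster assignments and annotated grasp labels. The paper's exact definitions are not given in this excerpt, so the majority-label purity and the duplicate-majority notion of redundancy used here are assumptions, not the authors' formulas.

```python
# Minimal sketch (not the authors' code): purity and a redundancy estimate
# from cluster assignments and manually annotated grasp labels.
from collections import Counter

def group_by_cluster(assignments, labels):
    """Map each cluster id to the list of grasp labels assigned to it."""
    clusters = {}
    for c, y in zip(assignments, labels):
        clusters.setdefault(c, []).append(y)
    return clusters

def cluster_purity(assignments, labels):
    """Fraction of samples whose label matches the majority label of their cluster."""
    clusters = group_by_cluster(assignments, labels)
    correct = sum(max(Counter(ys).values()) for ys in clusters.values())
    return correct / len(labels)

def cluster_redundancy(assignments, labels):
    """Assumed definition: fraction of clusters whose majority grasp label
    duplicates the majority label of another cluster."""
    clusters = group_by_cluster(assignments, labels)
    majorities = [Counter(ys).most_common(1)[0][0] for ys in clusters.values()]
    return (len(majorities) - len(set(majorities))) / len(majorities)

# Example: three clusters, two of which capture the same grasp type.
assignments = [0, 0, 0, 1, 1, 2, 2, 2]
labels = ["power", "power", "pinch", "pinch", "pinch", "power", "power", "power"]
print(cluster_purity(assignments, labels))      # 0.875
print(cluster_redundancy(assignments, labels))  # 0.333... (clusters 0 and 2 share "power")
```

In this reading, high purity means each discovered cluster is dominated by a single grasp type, while redundancy penalizes splitting one grasp type across several clusters; other operationalizations are possible.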
[2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. 
[2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Feix, T., Romero, J., Schmiedmayer, H.-B., Dollar, A.M., Kragic, D.: The grasp taxonomy of human grasp types. IEEE Transactions on human-machine systems 46(1), 66–77 (2015) Hermsdörfer et al. [2003] Hermsdörfer, J., Hagl, E., Nowak, D., Marquardt, C.: Grip force control during object manipulation in cerebral stroke. Clinical neurophysiology 114(5), 915–929 (2003) Bensmail et al. [2010] Bensmail, D., Robertson, J., Fermanian, C., Roby-Brami, A.: Botulinum toxin to treat upper-limb spasticity in hemiparetic patients: grasp strategies and kinematics of reach-to-grasp movements. 
Neurorehabilitation and neural repair 24(2), 141–151 (2010) Huang et al. [2015] Huang, D.-A., Ma, M., Ma, W.-C., Kitani, K.M.: How do we use our hands? discovering a diverse set of common grasps. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 666–675 (2015) Dousty and Zariffa [2020] Dousty, M., Zariffa, J.: Towards clustering hand grasps of individuals with spinal cord injury in egocentric video. In: 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), pp. 2151–2154 (2020). IEEE Domingos [2012] Domingos, P.: A few useful things to know about machine learning. Communications of the ACM 55(10), 78–87 (2012) Aggarwal et al. [2001] Aggarwal, C.C., Hinneburg, A., Keim, D.A.: On the surprising behavior of distance metrics in high dimensional space. In: International Conference on Database Theory, pp. 420–434 (2001). Springer LeCun et al. [2015] LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. nature 521(7553), 436–444 (2015) Hu et al. [2017] Hu, W., Miyato, T., Tokui, S., Matsumoto, E., Sugiyama, M.: Learning discrete representations via information maximizing self-augmented training. In: International Conference on Machine Learning, pp. 1558–1567 (2017). PMLR Guérin et al. [2017] Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. 
[2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. 
IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Hermsdörfer, J., Hagl, E., Nowak, D., Marquardt, C.: Grip force control during object manipulation in cerebral stroke. Clinical neurophysiology 114(5), 915–929 (2003) Bensmail et al. [2010] Bensmail, D., Robertson, J., Fermanian, C., Roby-Brami, A.: Botulinum toxin to treat upper-limb spasticity in hemiparetic patients: grasp strategies and kinematics of reach-to-grasp movements. Neurorehabilitation and neural repair 24(2), 141–151 (2010) Huang et al. [2015] Huang, D.-A., Ma, M., Ma, W.-C., Kitani, K.M.: How do we use our hands? discovering a diverse set of common grasps. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 666–675 (2015) Dousty and Zariffa [2020] Dousty, M., Zariffa, J.: Towards clustering hand grasps of individuals with spinal cord injury in egocentric video. In: 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), pp. 2151–2154 (2020). IEEE Domingos [2012] Domingos, P.: A few useful things to know about machine learning. Communications of the ACM 55(10), 78–87 (2012) Aggarwal et al. [2001] Aggarwal, C.C., Hinneburg, A., Keim, D.A.: On the surprising behavior of distance metrics in high dimensional space. In: International Conference on Database Theory, pp. 420–434 (2001). Springer LeCun et al. [2015] LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. nature 521(7553), 436–444 (2015) Hu et al. [2017] Hu, W., Miyato, T., Tokui, S., Matsumoto, E., Sugiyama, M.: Learning discrete representations via information maximizing self-augmented training. In: International Conference on Machine Learning, pp. 1558–1567 (2017). PMLR Guérin et al. [2017] Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. 
[2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. 
In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Bensmail, D., Robertson, J., Fermanian, C., Roby-Brami, A.: Botulinum toxin to treat upper-limb spasticity in hemiparetic patients: grasp strategies and kinematics of reach-to-grasp movements. Neurorehabilitation and neural repair 24(2), 141–151 (2010) Huang et al. [2015] Huang, D.-A., Ma, M., Ma, W.-C., Kitani, K.M.: How do we use our hands? discovering a diverse set of common grasps. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 
666–675 (2015) Dousty and Zariffa [2020] Dousty, M., Zariffa, J.: Towards clustering hand grasps of individuals with spinal cord injury in egocentric video. In: 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), pp. 2151–2154 (2020). IEEE Domingos [2012] Domingos, P.: A few useful things to know about machine learning. Communications of the ACM 55(10), 78–87 (2012) Aggarwal et al. [2001] Aggarwal, C.C., Hinneburg, A., Keim, D.A.: On the surprising behavior of distance metrics in high dimensional space. In: International Conference on Database Theory, pp. 420–434 (2001). Springer LeCun et al. [2015] LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. nature 521(7553), 436–444 (2015) Hu et al. [2017] Hu, W., Miyato, T., Tokui, S., Matsumoto, E., Sugiyama, M.: Learning discrete representations via information maximizing self-augmented training. In: International Conference on Machine Learning, pp. 1558–1567 (2017). PMLR Guérin et al. [2017] Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. 
[1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. 
[2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Huang, D.-A., Ma, M., Ma, W.-C., Kitani, K.M.: How do we use our hands? discovering a diverse set of common grasps. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 666–675 (2015) Dousty and Zariffa [2020] Dousty, M., Zariffa, J.: Towards clustering hand grasps of individuals with spinal cord injury in egocentric video. In: 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), pp. 2151–2154 (2020). IEEE Domingos [2012] Domingos, P.: A few useful things to know about machine learning. Communications of the ACM 55(10), 78–87 (2012) Aggarwal et al. [2001] Aggarwal, C.C., Hinneburg, A., Keim, D.A.: On the surprising behavior of distance metrics in high dimensional space. In: International Conference on Database Theory, pp. 420–434 (2001). Springer LeCun et al. [2015] LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. nature 521(7553), 436–444 (2015) Hu et al. [2017] Hu, W., Miyato, T., Tokui, S., Matsumoto, E., Sugiyama, M.: Learning discrete representations via information maximizing self-augmented training. In: International Conference on Machine Learning, pp. 1558–1567 (2017). PMLR Guérin et al. [2017] Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. 
[2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Dousty, M., Zariffa, J.: Towards clustering hand grasps of individuals with spinal cord injury in egocentric video. In: 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), pp. 2151–2154 (2020). IEEE Domingos [2012] Domingos, P.: A few useful things to know about machine learning. Communications of the ACM 55(10), 78–87 (2012) Aggarwal et al. [2001] Aggarwal, C.C., Hinneburg, A., Keim, D.A.: On the surprising behavior of distance metrics in high dimensional space. In: International Conference on Database Theory, pp. 420–434 (2001). Springer LeCun et al. [2015] LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. nature 521(7553), 436–444 (2015) Hu et al. [2017] Hu, W., Miyato, T., Tokui, S., Matsumoto, E., Sugiyama, M.: Learning discrete representations via information maximizing self-augmented training. In: International Conference on Machine Learning, pp. 1558–1567 (2017). PMLR Guérin et al. [2017] Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. 
arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. 
[2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 
702–703 (2020) Domingos, P.: A few useful things to know about machine learning. Communications of the ACM 55(10), 78–87 (2012) Aggarwal et al. [2001] Aggarwal, C.C., Hinneburg, A., Keim, D.A.: On the surprising behavior of distance metrics in high dimensional space. In: International Conference on Database Theory, pp. 420–434 (2001). Springer LeCun et al. [2015] LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. nature 521(7553), 436–444 (2015) Hu et al. [2017] Hu, W., Miyato, T., Tokui, S., Matsumoto, E., Sugiyama, M.: Learning discrete representations via information maximizing self-augmented training. In: International Conference on Machine Learning, pp. 1558–1567 (2017). PMLR Guérin et al. [2017] Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. 
SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. 
[2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. 
[2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. 
In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. 
[2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. 
SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. 
[2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. 
[2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. 
SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. 
In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? 
investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. 
[2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. 
SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). 
PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. 
[2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. 
[2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. 
[2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. 
[2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. 
IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. 
Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). 
PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 
702–703 (2020) Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. 
Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. 
[2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020)
- Lang, C.E., Holleran, C.L., Strube, M.J., Ellis, T.D., Newman, C.A., Fahey, M., DeAngelis, T.R., Nordahl, T.J., Reisman, D.S., Earhart, G.M., et al.: Improvement in the capacity for activity versus improvement in performance of activity in daily life during outpatient rehabilitation. Journal of Neurologic Physical Therapy 47(1), 16 (2023) Cini et al. [2019] Cini, F., Ortenzi, V., Corke, P., Controzzi, M.: On the choice of grasp type and location when handing over an object. Science Robotics 4(27), 9757 (2019) Dousty et al. [2023] Dousty, M., Bandini, A., Eftekhar, P., Fleet, D.J., Zariffa, J.: Grasp analysis in the home environment as a measure of hand function after cervical spinal cord injury. Neurorehabilitation and Neural Repair, 15459683231177601 (2023) Dousty and Zariffa [2020] Dousty, M., Zariffa, J.: Tenodesis grasp detection in egocentric video. IEEE Journal of Biomedical and Health Informatics 25(5), 1463–1470 (2020) Dousty et al. [2024] Dousty, M., Fleet, D.J., Zariffa, J.: Hand grasp classification in egocentric video after cervical spinal cord injury. IEEE Journal of Biomedical and Health Informatics 28(2), 645–654 (2024) https://doi.org/10.1109/JBHI.2023.3269692 Bandini et al. [2022] Bandini, A., Dousty, M., Hitzig, S.L., Craven, B.C., Kalsi-Ryan, S., Zariffa, J.: Measuring hand use in the home after cervical spinal cord injury using egocentric video. Journal of neurotrauma 39(23-24), 1697–1707 (2022) Feix et al. [2015] Feix, T., Romero, J., Schmiedmayer, H.-B., Dollar, A.M., Kragic, D.: The grasp taxonomy of human grasp types. IEEE Transactions on human-machine systems 46(1), 66–77 (2015) Hermsdörfer et al. [2003] Hermsdörfer, J., Hagl, E., Nowak, D., Marquardt, C.: Grip force control during object manipulation in cerebral stroke. Clinical neurophysiology 114(5), 915–929 (2003) Bensmail et al. [2010] Bensmail, D., Robertson, J., Fermanian, C., Roby-Brami, A.: Botulinum toxin to treat upper-limb spasticity in hemiparetic patients: grasp strategies and kinematics of reach-to-grasp movements. Neurorehabilitation and neural repair 24(2), 141–151 (2010) Huang et al. [2015] Huang, D.-A., Ma, M., Ma, W.-C., Kitani, K.M.: How do we use our hands? discovering a diverse set of common grasps. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 666–675 (2015) Dousty and Zariffa [2020] Dousty, M., Zariffa, J.: Towards clustering hand grasps of individuals with spinal cord injury in egocentric video. In: 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), pp. 2151–2154 (2020). IEEE Domingos [2012] Domingos, P.: A few useful things to know about machine learning. Communications of the ACM 55(10), 78–87 (2012) Aggarwal et al. [2001] Aggarwal, C.C., Hinneburg, A., Keim, D.A.: On the surprising behavior of distance metrics in high dimensional space. In: International Conference on Database Theory, pp. 420–434 (2001). Springer LeCun et al. [2015] LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. nature 521(7553), 436–444 (2015) Hu et al. [2017] Hu, W., Miyato, T., Tokui, S., Matsumoto, E., Sugiyama, M.: Learning discrete representations via information maximizing self-augmented training. In: International Conference on Machine Learning, pp. 1558–1567 (2017). PMLR Guérin et al. [2017] Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. 
arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. 
[2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 
702–703 (2020) Cini, F., Ortenzi, V., Corke, P., Controzzi, M.: On the choice of grasp type and location when handing over an object. Science Robotics 4(27), 9757 (2019) Dousty et al. [2023] Dousty, M., Bandini, A., Eftekhar, P., Fleet, D.J., Zariffa, J.: Grasp analysis in the home environment as a measure of hand function after cervical spinal cord injury. Neurorehabilitation and Neural Repair, 15459683231177601 (2023) Dousty and Zariffa [2020] Dousty, M., Zariffa, J.: Tenodesis grasp detection in egocentric video. IEEE Journal of Biomedical and Health Informatics 25(5), 1463–1470 (2020) Dousty et al. [2024] Dousty, M., Fleet, D.J., Zariffa, J.: Hand grasp classification in egocentric video after cervical spinal cord injury. IEEE Journal of Biomedical and Health Informatics 28(2), 645–654 (2024) https://doi.org/10.1109/JBHI.2023.3269692 Bandini et al. [2022] Bandini, A., Dousty, M., Hitzig, S.L., Craven, B.C., Kalsi-Ryan, S., Zariffa, J.: Measuring hand use in the home after cervical spinal cord injury using egocentric video. Journal of neurotrauma 39(23-24), 1697–1707 (2022) Feix et al. [2015] Feix, T., Romero, J., Schmiedmayer, H.-B., Dollar, A.M., Kragic, D.: The grasp taxonomy of human grasp types. IEEE Transactions on human-machine systems 46(1), 66–77 (2015) Hermsdörfer et al. [2003] Hermsdörfer, J., Hagl, E., Nowak, D., Marquardt, C.: Grip force control during object manipulation in cerebral stroke. Clinical neurophysiology 114(5), 915–929 (2003) Bensmail et al. [2010] Bensmail, D., Robertson, J., Fermanian, C., Roby-Brami, A.: Botulinum toxin to treat upper-limb spasticity in hemiparetic patients: grasp strategies and kinematics of reach-to-grasp movements. Neurorehabilitation and neural repair 24(2), 141–151 (2010) Huang et al. [2015] Huang, D.-A., Ma, M., Ma, W.-C., Kitani, K.M.: How do we use our hands? discovering a diverse set of common grasps. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 666–675 (2015) Dousty and Zariffa [2020] Dousty, M., Zariffa, J.: Towards clustering hand grasps of individuals with spinal cord injury in egocentric video. In: 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), pp. 2151–2154 (2020). IEEE Domingos [2012] Domingos, P.: A few useful things to know about machine learning. Communications of the ACM 55(10), 78–87 (2012) Aggarwal et al. [2001] Aggarwal, C.C., Hinneburg, A., Keim, D.A.: On the surprising behavior of distance metrics in high dimensional space. In: International Conference on Database Theory, pp. 420–434 (2001). Springer LeCun et al. [2015] LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. nature 521(7553), 436–444 (2015) Hu et al. [2017] Hu, W., Miyato, T., Tokui, S., Matsumoto, E., Sugiyama, M.: Learning discrete representations via information maximizing self-augmented training. In: International Conference on Machine Learning, pp. 1558–1567 (2017). PMLR Guérin et al. [2017] Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. 
[2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. 
In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Dousty, M., Bandini, A., Eftekhar, P., Fleet, D.J., Zariffa, J.: Grasp analysis in the home environment as a measure of hand function after cervical spinal cord injury. Neurorehabilitation and Neural Repair, 15459683231177601 (2023) Dousty and Zariffa [2020] Dousty, M., Zariffa, J.: Tenodesis grasp detection in egocentric video. IEEE Journal of Biomedical and Health Informatics 25(5), 1463–1470 (2020) Dousty et al. 
arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. 
[2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 
702–703 (2020) Domingos, P.: A few useful things to know about machine learning. Communications of the ACM 55(10), 78–87 (2012) Aggarwal et al. [2001] Aggarwal, C.C., Hinneburg, A., Keim, D.A.: On the surprising behavior of distance metrics in high dimensional space. In: International Conference on Database Theory, pp. 420–434 (2001). Springer LeCun et al. [2015] LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. nature 521(7553), 436–444 (2015) Hu et al. [2017] Hu, W., Miyato, T., Tokui, S., Matsumoto, E., Sugiyama, M.: Learning discrete representations via information maximizing self-augmented training. In: International Conference on Machine Learning, pp. 1558–1567 (2017). PMLR Guérin et al. [2017] Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. 
Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. 
Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Aggarwal, C.C., Hinneburg, A., Keim, D.A.: On the surprising behavior of distance metrics in high dimensional space. In: International Conference on Database Theory, pp. 420–434 (2001). Springer LeCun et al. [2015] LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. nature 521(7553), 436–444 (2015) Hu et al. [2017] Hu, W., Miyato, T., Tokui, S., Matsumoto, E., Sugiyama, M.: Learning discrete representations via information maximizing self-augmented training. In: International Conference on Machine Learning, pp. 1558–1567 (2017). PMLR Guérin et al. [2017] Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. 
IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). 
PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. nature 521(7553), 436–444 (2015) Hu et al. [2017] Hu, W., Miyato, T., Tokui, S., Matsumoto, E., Sugiyama, M.: Learning discrete representations via information maximizing self-augmented training. In: International Conference on Machine Learning, pp. 1558–1567 (2017). PMLR Guérin et al. [2017] Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. 
In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. 
[2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Hu, W., Miyato, T., Tokui, S., Matsumoto, E., Sugiyama, M.: Learning discrete representations via information maximizing self-augmented training. In: International Conference on Machine Learning, pp. 1558–1567 (2017). PMLR Guérin et al. [2017] Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. 
[2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. 
[2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. 
[2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. 
[2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. 
Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. 
[2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. 
[2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. 
In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. 
[2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. 
[2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. 
IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. 
[2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. 
SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. 
[2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. 
[2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. 
In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. 
[2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. 
SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. 
[2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. 
[2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. 
SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. 
In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? 
[2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Dousty, M., Zariffa, J.: Tenodesis grasp detection in egocentric video. IEEE Journal of Biomedical and Health Informatics 25(5), 1463–1470 (2020) Dousty et al. [2024] Dousty, M., Fleet, D.J., Zariffa, J.: Hand grasp classification in egocentric video after cervical spinal cord injury. IEEE Journal of Biomedical and Health Informatics 28(2), 645–654 (2024) https://doi.org/10.1109/JBHI.2023.3269692 Bandini et al. [2022] Bandini, A., Dousty, M., Hitzig, S.L., Craven, B.C., Kalsi-Ryan, S., Zariffa, J.: Measuring hand use in the home after cervical spinal cord injury using egocentric video. Journal of neurotrauma 39(23-24), 1697–1707 (2022) Feix et al. [2015] Feix, T., Romero, J., Schmiedmayer, H.-B., Dollar, A.M., Kragic, D.: The grasp taxonomy of human grasp types. IEEE Transactions on human-machine systems 46(1), 66–77 (2015) Hermsdörfer et al. 
[2003] Hermsdörfer, J., Hagl, E., Nowak, D., Marquardt, C.: Grip force control during object manipulation in cerebral stroke. Clinical neurophysiology 114(5), 915–929 (2003) Bensmail et al. [2010] Bensmail, D., Robertson, J., Fermanian, C., Roby-Brami, A.: Botulinum toxin to treat upper-limb spasticity in hemiparetic patients: grasp strategies and kinematics of reach-to-grasp movements. Neurorehabilitation and neural repair 24(2), 141–151 (2010) Huang et al. [2015] Huang, D.-A., Ma, M., Ma, W.-C., Kitani, K.M.: How do we use our hands? discovering a diverse set of common grasps. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 666–675 (2015) Dousty and Zariffa [2020] Dousty, M., Zariffa, J.: Towards clustering hand grasps of individuals with spinal cord injury in egocentric video. In: 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), pp. 2151–2154 (2020). IEEE Domingos [2012] Domingos, P.: A few useful things to know about machine learning. Communications of the ACM 55(10), 78–87 (2012) Aggarwal et al. [2001] Aggarwal, C.C., Hinneburg, A., Keim, D.A.: On the surprising behavior of distance metrics in high dimensional space. In: International Conference on Database Theory, pp. 420–434 (2001). Springer LeCun et al. [2015] LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. nature 521(7553), 436–444 (2015) Hu et al. [2017] Hu, W., Miyato, T., Tokui, S., Matsumoto, E., Sugiyama, M.: Learning discrete representations via information maximizing self-augmented training. In: International Conference on Machine Learning, pp. 1558–1567 (2017). PMLR Guérin et al. [2017] Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. 
In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. 
[2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Dousty, M., Fleet, D.J., Zariffa, J.: Hand grasp classification in egocentric video after cervical spinal cord injury. IEEE Journal of Biomedical and Health Informatics 28(2), 645–654 (2024) https://doi.org/10.1109/JBHI.2023.3269692 Bandini et al. [2022] Bandini, A., Dousty, M., Hitzig, S.L., Craven, B.C., Kalsi-Ryan, S., Zariffa, J.: Measuring hand use in the home after cervical spinal cord injury using egocentric video. Journal of neurotrauma 39(23-24), 1697–1707 (2022) Feix et al. [2015] Feix, T., Romero, J., Schmiedmayer, H.-B., Dollar, A.M., Kragic, D.: The grasp taxonomy of human grasp types. IEEE Transactions on human-machine systems 46(1), 66–77 (2015) Hermsdörfer et al. [2003] Hermsdörfer, J., Hagl, E., Nowak, D., Marquardt, C.: Grip force control during object manipulation in cerebral stroke. Clinical neurophysiology 114(5), 915–929 (2003) Bensmail et al. [2010] Bensmail, D., Robertson, J., Fermanian, C., Roby-Brami, A.: Botulinum toxin to treat upper-limb spasticity in hemiparetic patients: grasp strategies and kinematics of reach-to-grasp movements. Neurorehabilitation and neural repair 24(2), 141–151 (2010) Huang et al. [2015] Huang, D.-A., Ma, M., Ma, W.-C., Kitani, K.M.: How do we use our hands? discovering a diverse set of common grasps. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 666–675 (2015) Dousty and Zariffa [2020] Dousty, M., Zariffa, J.: Towards clustering hand grasps of individuals with spinal cord injury in egocentric video. In: 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), pp. 2151–2154 (2020). IEEE Domingos [2012] Domingos, P.: A few useful things to know about machine learning. Communications of the ACM 55(10), 78–87 (2012) Aggarwal et al. 
[2001] Aggarwal, C.C., Hinneburg, A., Keim, D.A.: On the surprising behavior of distance metrics in high dimensional space. In: International Conference on Database Theory, pp. 420–434 (2001). Springer LeCun et al. [2015] LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. nature 521(7553), 436–444 (2015) Hu et al. [2017] Hu, W., Miyato, T., Tokui, S., Matsumoto, E., Sugiyama, M.: Learning discrete representations via information maximizing self-augmented training. In: International Conference on Machine Learning, pp. 1558–1567 (2017). PMLR Guérin et al. [2017] Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. 
[2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. 
SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Bandini, A., Dousty, M., Hitzig, S.L., Craven, B.C., Kalsi-Ryan, S., Zariffa, J.: Measuring hand use in the home after cervical spinal cord injury using egocentric video. Journal of neurotrauma 39(23-24), 1697–1707 (2022) Feix et al. [2015] Feix, T., Romero, J., Schmiedmayer, H.-B., Dollar, A.M., Kragic, D.: The grasp taxonomy of human grasp types. IEEE Transactions on human-machine systems 46(1), 66–77 (2015) Hermsdörfer et al. [2003] Hermsdörfer, J., Hagl, E., Nowak, D., Marquardt, C.: Grip force control during object manipulation in cerebral stroke. Clinical neurophysiology 114(5), 915–929 (2003) Bensmail et al. [2010] Bensmail, D., Robertson, J., Fermanian, C., Roby-Brami, A.: Botulinum toxin to treat upper-limb spasticity in hemiparetic patients: grasp strategies and kinematics of reach-to-grasp movements. Neurorehabilitation and neural repair 24(2), 141–151 (2010) Huang et al. [2015] Huang, D.-A., Ma, M., Ma, W.-C., Kitani, K.M.: How do we use our hands? discovering a diverse set of common grasps. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 666–675 (2015) Dousty and Zariffa [2020] Dousty, M., Zariffa, J.: Towards clustering hand grasps of individuals with spinal cord injury in egocentric video. In: 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), pp. 2151–2154 (2020). IEEE Domingos [2012] Domingos, P.: A few useful things to know about machine learning. Communications of the ACM 55(10), 78–87 (2012) Aggarwal et al. [2001] Aggarwal, C.C., Hinneburg, A., Keim, D.A.: On the surprising behavior of distance metrics in high dimensional space. In: International Conference on Database Theory, pp. 420–434 (2001). Springer LeCun et al. [2015] LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. nature 521(7553), 436–444 (2015) Hu et al. [2017] Hu, W., Miyato, T., Tokui, S., Matsumoto, E., Sugiyama, M.: Learning discrete representations via information maximizing self-augmented training. In: International Conference on Machine Learning, pp. 1558–1567 (2017). PMLR Guérin et al. [2017] Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. 
[2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. 
[2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Feix, T., Romero, J., Schmiedmayer, H.-B., Dollar, A.M., Kragic, D.: The grasp taxonomy of human grasp types. IEEE Transactions on human-machine systems 46(1), 66–77 (2015) Hermsdörfer et al. [2003] Hermsdörfer, J., Hagl, E., Nowak, D., Marquardt, C.: Grip force control during object manipulation in cerebral stroke. Clinical neurophysiology 114(5), 915–929 (2003) Bensmail et al. [2010] Bensmail, D., Robertson, J., Fermanian, C., Roby-Brami, A.: Botulinum toxin to treat upper-limb spasticity in hemiparetic patients: grasp strategies and kinematics of reach-to-grasp movements. 
Neurorehabilitation and neural repair 24(2), 141–151 (2010) Huang et al. [2015] Huang, D.-A., Ma, M., Ma, W.-C., Kitani, K.M.: How do we use our hands? discovering a diverse set of common grasps. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 666–675 (2015) Dousty and Zariffa [2020] Dousty, M., Zariffa, J.: Towards clustering hand grasps of individuals with spinal cord injury in egocentric video. In: 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), pp. 2151–2154 (2020). IEEE Domingos [2012] Domingos, P.: A few useful things to know about machine learning. Communications of the ACM 55(10), 78–87 (2012) Aggarwal et al. [2001] Aggarwal, C.C., Hinneburg, A., Keim, D.A.: On the surprising behavior of distance metrics in high dimensional space. In: International Conference on Database Theory, pp. 420–434 (2001). Springer LeCun et al. [2015] LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. nature 521(7553), 436–444 (2015) Hu et al. [2017] Hu, W., Miyato, T., Tokui, S., Matsumoto, E., Sugiyama, M.: Learning discrete representations via information maximizing self-augmented training. In: International Conference on Machine Learning, pp. 1558–1567 (2017). PMLR Guérin et al. [2017] Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. 
[2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. 
IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Hermsdörfer, J., Hagl, E., Nowak, D., Marquardt, C.: Grip force control during object manipulation in cerebral stroke. Clinical neurophysiology 114(5), 915–929 (2003) Bensmail et al. [2010] Bensmail, D., Robertson, J., Fermanian, C., Roby-Brami, A.: Botulinum toxin to treat upper-limb spasticity in hemiparetic patients: grasp strategies and kinematics of reach-to-grasp movements. Neurorehabilitation and neural repair 24(2), 141–151 (2010) Huang et al. [2015] Huang, D.-A., Ma, M., Ma, W.-C., Kitani, K.M.: How do we use our hands? discovering a diverse set of common grasps. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 666–675 (2015) Dousty and Zariffa [2020] Dousty, M., Zariffa, J.: Towards clustering hand grasps of individuals with spinal cord injury in egocentric video. In: 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), pp. 2151–2154 (2020). IEEE Domingos [2012] Domingos, P.: A few useful things to know about machine learning. Communications of the ACM 55(10), 78–87 (2012) Aggarwal et al. [2001] Aggarwal, C.C., Hinneburg, A., Keim, D.A.: On the surprising behavior of distance metrics in high dimensional space. In: International Conference on Database Theory, pp. 420–434 (2001). Springer LeCun et al. [2015] LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. nature 521(7553), 436–444 (2015) Hu et al. [2017] Hu, W., Miyato, T., Tokui, S., Matsumoto, E., Sugiyama, M.: Learning discrete representations via information maximizing self-augmented training. In: International Conference on Machine Learning, pp. 1558–1567 (2017). PMLR Guérin et al. [2017] Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. 
[2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. 
In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Bensmail, D., Robertson, J., Fermanian, C., Roby-Brami, A.: Botulinum toxin to treat upper-limb spasticity in hemiparetic patients: grasp strategies and kinematics of reach-to-grasp movements. Neurorehabilitation and neural repair 24(2), 141–151 (2010) Huang et al. [2015] Huang, D.-A., Ma, M., Ma, W.-C., Kitani, K.M.: How do we use our hands? discovering a diverse set of common grasps. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 
666–675 (2015) Dousty and Zariffa [2020] Dousty, M., Zariffa, J.: Towards clustering hand grasps of individuals with spinal cord injury in egocentric video. In: 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), pp. 2151–2154 (2020). IEEE Domingos [2012] Domingos, P.: A few useful things to know about machine learning. Communications of the ACM 55(10), 78–87 (2012) Aggarwal et al. [2001] Aggarwal, C.C., Hinneburg, A., Keim, D.A.: On the surprising behavior of distance metrics in high dimensional space. In: International Conference on Database Theory, pp. 420–434 (2001). Springer LeCun et al. [2015] LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. nature 521(7553), 436–444 (2015) Hu et al. [2017] Hu, W., Miyato, T., Tokui, S., Matsumoto, E., Sugiyama, M.: Learning discrete representations via information maximizing self-augmented training. In: International Conference on Machine Learning, pp. 1558–1567 (2017). PMLR Guérin et al. [2017] Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. 
[1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. 
[2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Huang, D.-A., Ma, M., Ma, W.-C., Kitani, K.M.: How do we use our hands? discovering a diverse set of common grasps. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 666–675 (2015) Dousty and Zariffa [2020] Dousty, M., Zariffa, J.: Towards clustering hand grasps of individuals with spinal cord injury in egocentric video. In: 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), pp. 2151–2154 (2020). IEEE Domingos [2012] Domingos, P.: A few useful things to know about machine learning. Communications of the ACM 55(10), 78–87 (2012) Aggarwal et al. [2001] Aggarwal, C.C., Hinneburg, A., Keim, D.A.: On the surprising behavior of distance metrics in high dimensional space. In: International Conference on Database Theory, pp. 420–434 (2001). Springer LeCun et al. [2015] LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. nature 521(7553), 436–444 (2015) Hu et al. [2017] Hu, W., Miyato, T., Tokui, S., Matsumoto, E., Sugiyama, M.: Learning discrete representations via information maximizing self-augmented training. In: International Conference on Machine Learning, pp. 1558–1567 (2017). PMLR Guérin et al. [2017] Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. 
[2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Dousty, M., Zariffa, J.: Towards clustering hand grasps of individuals with spinal cord injury in egocentric video. In: 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), pp. 2151–2154 (2020). IEEE Domingos [2012] Domingos, P.: A few useful things to know about machine learning. Communications of the ACM 55(10), 78–87 (2012) Aggarwal et al. [2001] Aggarwal, C.C., Hinneburg, A., Keim, D.A.: On the surprising behavior of distance metrics in high dimensional space. In: International Conference on Database Theory, pp. 420–434 (2001). Springer LeCun et al. [2015] LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. nature 521(7553), 436–444 (2015) Hu et al. [2017] Hu, W., Miyato, T., Tokui, S., Matsumoto, E., Sugiyama, M.: Learning discrete representations via information maximizing self-augmented training. In: International Conference on Machine Learning, pp. 1558–1567 (2017). PMLR Guérin et al. [2017] Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. 
arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. 
[2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 
702–703 (2020) Domingos, P.: A few useful things to know about machine learning. Communications of the ACM 55(10), 78–87 (2012) Aggarwal et al. [2001] Aggarwal, C.C., Hinneburg, A., Keim, D.A.: On the surprising behavior of distance metrics in high dimensional space. In: International Conference on Database Theory, pp. 420–434 (2001). Springer LeCun et al. [2015] LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. nature 521(7553), 436–444 (2015) Hu et al. [2017] Hu, W., Miyato, T., Tokui, S., Matsumoto, E., Sugiyama, M.: Learning discrete representations via information maximizing self-augmented training. In: International Conference on Machine Learning, pp. 1558–1567 (2017). PMLR Guérin et al. [2017] Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. 
Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. 
Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Aggarwal, C.C., Hinneburg, A., Keim, D.A.: On the surprising behavior of distance metrics in high dimensional space. In: International Conference on Database Theory, pp. 420–434 (2001). Springer LeCun et al. [2015] LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. nature 521(7553), 436–444 (2015) Hu et al. [2017] Hu, W., Miyato, T., Tokui, S., Matsumoto, E., Sugiyama, M.: Learning discrete representations via information maximizing self-augmented training. In: International Conference on Machine Learning, pp. 1558–1567 (2017). PMLR Guérin et al. [2017] Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. 
IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). 
PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. nature 521(7553), 436–444 (2015) Hu et al. [2017] Hu, W., Miyato, T., Tokui, S., Matsumoto, E., Sugiyama, M.: Learning discrete representations via information maximizing self-augmented training. In: International Conference on Machine Learning, pp. 1558–1567 (2017). PMLR Guérin et al. [2017] Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. 
In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. 
[2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Hu, W., Miyato, T., Tokui, S., Matsumoto, E., Sugiyama, M.: Learning discrete representations via information maximizing self-augmented training. In: International Conference on Machine Learning, pp. 1558–1567 (2017). PMLR Guérin et al. [2017] Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. 
[2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. 
[2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. 
[2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015)
Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017)
Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021)
Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020)
Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020)
Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of Neuroengineering and Rehabilitation 16(1), 1–11 (2019)
Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM Computing Surveys (CSUR) 31(3), 264–323 (1999)
Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017)
Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent: a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020)
Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021)
Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021)
Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature Communications 11(1), 746 (2020)
Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? Investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022)
Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: CNN features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014)
Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: WILDS: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR
Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020)
Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: ImageNet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). IEEE
Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010)
Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR
Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748
de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016)
Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020)
Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006)
Fraley [1998] Fraley, C.: Algorithms for model-based Gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998)
Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020)
Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020)
Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: RandAugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020)
Wang and Jiang [2020] Wang, J., Jiang, J.: SA-Net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020)
Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE
Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained CNN feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021)
Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019)
Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018)
[2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. 
[2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. 
In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. 
[2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. 
[2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. 
IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. 
[2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. 
SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. 
[2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. 
[2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. 
In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. 
[2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. 
SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. 
[2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. 
[2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. 
- Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: CNN features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017)
- Wang, J., Jiang, J.: SA-Net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020)
- Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE
- Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained CNN feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021)
- Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019)
- Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018)
- Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015)
- Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017)
- Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021)
- Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020)
- Dousty, M., Zariffa, J.: Tenodesis grasp detection in egocentric video. IEEE Journal of Biomedical and Health Informatics 25(5), 1463–1470 (2020)
- Dousty, M., Fleet, D.J., Zariffa, J.: Hand grasp classification in egocentric video after cervical spinal cord injury. IEEE Journal of Biomedical and Health Informatics 28(2), 645–654 (2024). https://doi.org/10.1109/JBHI.2023.3269692
- Bandini, A., Dousty, M., Hitzig, S.L., Craven, B.C., Kalsi-Ryan, S., Zariffa, J.: Measuring hand use in the home after cervical spinal cord injury using egocentric video.
Journal of neurotrauma 39(23-24), 1697–1707 (2022) Feix et al. [2015] Feix, T., Romero, J., Schmiedmayer, H.-B., Dollar, A.M., Kragic, D.: The grasp taxonomy of human grasp types. IEEE Transactions on human-machine systems 46(1), 66–77 (2015) Hermsdörfer et al. [2003] Hermsdörfer, J., Hagl, E., Nowak, D., Marquardt, C.: Grip force control during object manipulation in cerebral stroke. Clinical neurophysiology 114(5), 915–929 (2003) Bensmail et al. [2010] Bensmail, D., Robertson, J., Fermanian, C., Roby-Brami, A.: Botulinum toxin to treat upper-limb spasticity in hemiparetic patients: grasp strategies and kinematics of reach-to-grasp movements. Neurorehabilitation and neural repair 24(2), 141–151 (2010) Huang et al. [2015] Huang, D.-A., Ma, M., Ma, W.-C., Kitani, K.M.: How do we use our hands? discovering a diverse set of common grasps. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 666–675 (2015) Dousty and Zariffa [2020] Dousty, M., Zariffa, J.: Towards clustering hand grasps of individuals with spinal cord injury in egocentric video. In: 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), pp. 2151–2154 (2020). IEEE Domingos [2012] Domingos, P.: A few useful things to know about machine learning. Communications of the ACM 55(10), 78–87 (2012) Aggarwal et al. [2001] Aggarwal, C.C., Hinneburg, A., Keim, D.A.: On the surprising behavior of distance metrics in high dimensional space. In: International Conference on Database Theory, pp. 420–434 (2001). Springer LeCun et al. [2015] LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. nature 521(7553), 436–444 (2015) Hu et al. [2017] Hu, W., Miyato, T., Tokui, S., Matsumoto, E., Sugiyama, M.: Learning discrete representations via information maximizing self-augmented training. In: International Conference on Machine Learning, pp. 1558–1567 (2017). PMLR Guérin et al. [2017] Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. 
[2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. 
In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Dousty, M., Fleet, D.J., Zariffa, J.: Hand grasp classification in egocentric video after cervical spinal cord injury. IEEE Journal of Biomedical and Health Informatics 28(2), 645–654 (2024) https://doi.org/10.1109/JBHI.2023.3269692 Bandini et al. [2022] Bandini, A., Dousty, M., Hitzig, S.L., Craven, B.C., Kalsi-Ryan, S., Zariffa, J.: Measuring hand use in the home after cervical spinal cord injury using egocentric video. Journal of neurotrauma 39(23-24), 1697–1707 (2022) Feix et al. [2015] Feix, T., Romero, J., Schmiedmayer, H.-B., Dollar, A.M., Kragic, D.: The grasp taxonomy of human grasp types. IEEE Transactions on human-machine systems 46(1), 66–77 (2015) Hermsdörfer et al. [2003] Hermsdörfer, J., Hagl, E., Nowak, D., Marquardt, C.: Grip force control during object manipulation in cerebral stroke. Clinical neurophysiology 114(5), 915–929 (2003) Bensmail et al. [2010] Bensmail, D., Robertson, J., Fermanian, C., Roby-Brami, A.: Botulinum toxin to treat upper-limb spasticity in hemiparetic patients: grasp strategies and kinematics of reach-to-grasp movements. Neurorehabilitation and neural repair 24(2), 141–151 (2010) Huang et al. [2015] Huang, D.-A., Ma, M., Ma, W.-C., Kitani, K.M.: How do we use our hands? discovering a diverse set of common grasps. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 
666–675 (2015) Dousty and Zariffa [2020] Dousty, M., Zariffa, J.: Towards clustering hand grasps of individuals with spinal cord injury in egocentric video. In: 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), pp. 2151–2154 (2020). IEEE Domingos [2012] Domingos, P.: A few useful things to know about machine learning. Communications of the ACM 55(10), 78–87 (2012) Aggarwal et al. [2001] Aggarwal, C.C., Hinneburg, A., Keim, D.A.: On the surprising behavior of distance metrics in high dimensional space. In: International Conference on Database Theory, pp. 420–434 (2001). Springer LeCun et al. [2015] LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. nature 521(7553), 436–444 (2015) Hu et al. [2017] Hu, W., Miyato, T., Tokui, S., Matsumoto, E., Sugiyama, M.: Learning discrete representations via information maximizing self-augmented training. In: International Conference on Machine Learning, pp. 1558–1567 (2017). PMLR Guérin et al. [2017] Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. 
[1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. 
[2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Bandini, A., Dousty, M., Hitzig, S.L., Craven, B.C., Kalsi-Ryan, S., Zariffa, J.: Measuring hand use in the home after cervical spinal cord injury using egocentric video. Journal of neurotrauma 39(23-24), 1697–1707 (2022) Feix et al. [2015] Feix, T., Romero, J., Schmiedmayer, H.-B., Dollar, A.M., Kragic, D.: The grasp taxonomy of human grasp types. IEEE Transactions on human-machine systems 46(1), 66–77 (2015) Hermsdörfer et al. [2003] Hermsdörfer, J., Hagl, E., Nowak, D., Marquardt, C.: Grip force control during object manipulation in cerebral stroke. Clinical neurophysiology 114(5), 915–929 (2003) Bensmail et al. [2010] Bensmail, D., Robertson, J., Fermanian, C., Roby-Brami, A.: Botulinum toxin to treat upper-limb spasticity in hemiparetic patients: grasp strategies and kinematics of reach-to-grasp movements. Neurorehabilitation and neural repair 24(2), 141–151 (2010) Huang et al. [2015] Huang, D.-A., Ma, M., Ma, W.-C., Kitani, K.M.: How do we use our hands? discovering a diverse set of common grasps. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 666–675 (2015) Dousty and Zariffa [2020] Dousty, M., Zariffa, J.: Towards clustering hand grasps of individuals with spinal cord injury in egocentric video. In: 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), pp. 2151–2154 (2020). IEEE Domingos [2012] Domingos, P.: A few useful things to know about machine learning. Communications of the ACM 55(10), 78–87 (2012) Aggarwal et al. [2001] Aggarwal, C.C., Hinneburg, A., Keim, D.A.: On the surprising behavior of distance metrics in high dimensional space. In: International Conference on Database Theory, pp. 420–434 (2001). Springer LeCun et al. [2015] LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. nature 521(7553), 436–444 (2015) Hu et al. [2017] Hu, W., Miyato, T., Tokui, S., Matsumoto, E., Sugiyama, M.: Learning discrete representations via information maximizing self-augmented training. In: International Conference on Machine Learning, pp. 1558–1567 (2017). PMLR Guérin et al. [2017] Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. 
Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. 
Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Feix, T., Romero, J., Schmiedmayer, H.-B., Dollar, A.M., Kragic, D.: The grasp taxonomy of human grasp types. 
IEEE Transactions on human-machine systems 46(1), 66–77 (2015) Hermsdörfer et al. [2003] Hermsdörfer, J., Hagl, E., Nowak, D., Marquardt, C.: Grip force control during object manipulation in cerebral stroke. Clinical neurophysiology 114(5), 915–929 (2003) Bensmail et al. [2010] Bensmail, D., Robertson, J., Fermanian, C., Roby-Brami, A.: Botulinum toxin to treat upper-limb spasticity in hemiparetic patients: grasp strategies and kinematics of reach-to-grasp movements. Neurorehabilitation and neural repair 24(2), 141–151 (2010) Huang et al. [2015] Huang, D.-A., Ma, M., Ma, W.-C., Kitani, K.M.: How do we use our hands? discovering a diverse set of common grasps. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 666–675 (2015) Dousty and Zariffa [2020] Dousty, M., Zariffa, J.: Towards clustering hand grasps of individuals with spinal cord injury in egocentric video. In: 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), pp. 2151–2154 (2020). IEEE Domingos [2012] Domingos, P.: A few useful things to know about machine learning. Communications of the ACM 55(10), 78–87 (2012) Aggarwal et al. [2001] Aggarwal, C.C., Hinneburg, A., Keim, D.A.: On the surprising behavior of distance metrics in high dimensional space. In: International Conference on Database Theory, pp. 420–434 (2001). Springer LeCun et al. [2015] LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. nature 521(7553), 436–444 (2015) Hu et al. [2017] Hu, W., Miyato, T., Tokui, S., Matsumoto, E., Sugiyama, M.: Learning discrete representations via information maximizing self-augmented training. In: International Conference on Machine Learning, pp. 1558–1567 (2017). PMLR Guérin et al. [2017] Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. 
[2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. 
In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Hermsdörfer, J., Hagl, E., Nowak, D., Marquardt, C.: Grip force control during object manipulation in cerebral stroke. Clinical neurophysiology 114(5), 915–929 (2003) Bensmail et al. [2010] Bensmail, D., Robertson, J., Fermanian, C., Roby-Brami, A.: Botulinum toxin to treat upper-limb spasticity in hemiparetic patients: grasp strategies and kinematics of reach-to-grasp movements. Neurorehabilitation and neural repair 24(2), 141–151 (2010) Huang et al. [2015] Huang, D.-A., Ma, M., Ma, W.-C., Kitani, K.M.: How do we use our hands? discovering a diverse set of common grasps. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 666–675 (2015) Dousty and Zariffa [2020] Dousty, M., Zariffa, J.: Towards clustering hand grasps of individuals with spinal cord injury in egocentric video. In: 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), pp. 2151–2154 (2020). IEEE Domingos [2012] Domingos, P.: A few useful things to know about machine learning. Communications of the ACM 55(10), 78–87 (2012) Aggarwal et al. [2001] Aggarwal, C.C., Hinneburg, A., Keim, D.A.: On the surprising behavior of distance metrics in high dimensional space. In: International Conference on Database Theory, pp. 420–434 (2001). Springer LeCun et al. [2015] LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. nature 521(7553), 436–444 (2015) Hu et al. [2017] Hu, W., Miyato, T., Tokui, S., Matsumoto, E., Sugiyama, M.: Learning discrete representations via information maximizing self-augmented training. In: International Conference on Machine Learning, pp. 1558–1567 (2017). PMLR Guérin et al. 
[2017] Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? 
Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. 
In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Bensmail, D., Robertson, J., Fermanian, C., Roby-Brami, A.: Botulinum toxin to treat upper-limb spasticity in hemiparetic patients: grasp strategies and kinematics of reach-to-grasp movements. Neurorehabilitation and neural repair 24(2), 141–151 (2010) Huang et al. [2015] Huang, D.-A., Ma, M., Ma, W.-C., Kitani, K.M.: How do we use our hands? discovering a diverse set of common grasps. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 666–675 (2015) Dousty and Zariffa [2020] Dousty, M., Zariffa, J.: Towards clustering hand grasps of individuals with spinal cord injury in egocentric video. In: 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), pp. 2151–2154 (2020). IEEE Domingos [2012] Domingos, P.: A few useful things to know about machine learning. Communications of the ACM 55(10), 78–87 (2012) Aggarwal et al. [2001] Aggarwal, C.C., Hinneburg, A., Keim, D.A.: On the surprising behavior of distance metrics in high dimensional space. In: International Conference on Database Theory, pp. 420–434 (2001). Springer LeCun et al. [2015] LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. nature 521(7553), 436–444 (2015) Hu et al. [2017] Hu, W., Miyato, T., Tokui, S., Matsumoto, E., Sugiyama, M.: Learning discrete representations via information maximizing self-augmented training. In: International Conference on Machine Learning, pp. 1558–1567 (2017). PMLR Guérin et al. [2017] Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 
[2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. 
[2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. 
[2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. 
[2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. 
[2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. 
[2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. 
In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. 
[2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. 
SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 
13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. 
[2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. 
In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. 
[2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 
702–703 (2020) Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. 
In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. 
[2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. 
Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. 
Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). 
PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. 
arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. 
[2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. 
[2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. 
[2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 
806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. 
[2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. 
IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. 
[2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. 
Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. 
Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. 
[2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020)
- Hu, W., Miyato, T., Tokui, S., Matsumoto, E., Sugiyama, M.: Learning discrete representations via information maximizing self-augmented training. In: International Conference on Machine Learning, pp. 1558–1567 (2017). PMLR
- Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: CNN features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017)
- Wang, J., Jiang, J.: SA-Net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020)
- Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE
- Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained CNN feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021)
- Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019)
- Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018)
- Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015)
- Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017)
- Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021)
- Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020)
- Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020)
- Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of NeuroEngineering and Rehabilitation 16(1), 1–11 (2019)
- Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM Computing Surveys (CSUR) 31(3), 264–323 (1999)
- Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017)
- Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent: a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020)
- Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021)
- Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021)
- Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature Communications 11(1), 746 (2020)
- Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? Investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022)
- Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: CNN features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014)
- Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: WILDS: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR
- Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020)
- Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: ImageNet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). IEEE
- Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010)
- Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR
- Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018). https://doi.org/10.1109/JBHI.2016.2636748
- Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016)
- Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020)
- Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006)
- Fraley, C.: Algorithms for model-based Gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998)
- Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020)
- Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020)
- Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: RandAugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020)
[2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. 
In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Bandini, A., Dousty, M., Hitzig, S.L., Craven, B.C., Kalsi-Ryan, S., Zariffa, J.: Measuring hand use in the home after cervical spinal cord injury using egocentric video. Journal of neurotrauma 39(23-24), 1697–1707 (2022) Feix et al. [2015] Feix, T., Romero, J., Schmiedmayer, H.-B., Dollar, A.M., Kragic, D.: The grasp taxonomy of human grasp types. IEEE Transactions on human-machine systems 46(1), 66–77 (2015) Hermsdörfer et al. [2003] Hermsdörfer, J., Hagl, E., Nowak, D., Marquardt, C.: Grip force control during object manipulation in cerebral stroke. Clinical neurophysiology 114(5), 915–929 (2003) Bensmail et al. [2010] Bensmail, D., Robertson, J., Fermanian, C., Roby-Brami, A.: Botulinum toxin to treat upper-limb spasticity in hemiparetic patients: grasp strategies and kinematics of reach-to-grasp movements. Neurorehabilitation and neural repair 24(2), 141–151 (2010) Huang et al. [2015] Huang, D.-A., Ma, M., Ma, W.-C., Kitani, K.M.: How do we use our hands? discovering a diverse set of common grasps. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 666–675 (2015) Dousty and Zariffa [2020] Dousty, M., Zariffa, J.: Towards clustering hand grasps of individuals with spinal cord injury in egocentric video. In: 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), pp. 2151–2154 (2020). IEEE Domingos [2012] Domingos, P.: A few useful things to know about machine learning. Communications of the ACM 55(10), 78–87 (2012) Aggarwal et al. [2001] Aggarwal, C.C., Hinneburg, A., Keim, D.A.: On the surprising behavior of distance metrics in high dimensional space. In: International Conference on Database Theory, pp. 420–434 (2001). Springer LeCun et al. [2015] LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. nature 521(7553), 436–444 (2015) Hu et al. 
[2017] Hu, W., Miyato, T., Tokui, S., Matsumoto, E., Sugiyama, M.: Learning discrete representations via information maximizing self-augmented training. In: International Conference on Machine Learning, pp. 1558–1567 (2017). PMLR Guérin et al. [2017] Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. 
[2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. 
[2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Feix, T., Romero, J., Schmiedmayer, H.-B., Dollar, A.M., Kragic, D.: The grasp taxonomy of human grasp types. IEEE Transactions on human-machine systems 46(1), 66–77 (2015) Hermsdörfer et al. [2003] Hermsdörfer, J., Hagl, E., Nowak, D., Marquardt, C.: Grip force control during object manipulation in cerebral stroke. Clinical neurophysiology 114(5), 915–929 (2003) Bensmail et al. [2010] Bensmail, D., Robertson, J., Fermanian, C., Roby-Brami, A.: Botulinum toxin to treat upper-limb spasticity in hemiparetic patients: grasp strategies and kinematics of reach-to-grasp movements. Neurorehabilitation and neural repair 24(2), 141–151 (2010) Huang et al. [2015] Huang, D.-A., Ma, M., Ma, W.-C., Kitani, K.M.: How do we use our hands? discovering a diverse set of common grasps. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 666–675 (2015) Dousty and Zariffa [2020] Dousty, M., Zariffa, J.: Towards clustering hand grasps of individuals with spinal cord injury in egocentric video. In: 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), pp. 2151–2154 (2020). IEEE Domingos [2012] Domingos, P.: A few useful things to know about machine learning. Communications of the ACM 55(10), 78–87 (2012) Aggarwal et al. [2001] Aggarwal, C.C., Hinneburg, A., Keim, D.A.: On the surprising behavior of distance metrics in high dimensional space. In: International Conference on Database Theory, pp. 420–434 (2001). Springer LeCun et al. [2015] LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. nature 521(7553), 436–444 (2015) Hu et al. [2017] Hu, W., Miyato, T., Tokui, S., Matsumoto, E., Sugiyama, M.: Learning discrete representations via information maximizing self-augmented training. In: International Conference on Machine Learning, pp. 1558–1567 (2017). PMLR Guérin et al. [2017] Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 
19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. 
[2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Hermsdörfer, J., Hagl, E., Nowak, D., Marquardt, C.: Grip force control during object manipulation in cerebral stroke. Clinical neurophysiology 114(5), 915–929 (2003) Bensmail et al. [2010] Bensmail, D., Robertson, J., Fermanian, C., Roby-Brami, A.: Botulinum toxin to treat upper-limb spasticity in hemiparetic patients: grasp strategies and kinematics of reach-to-grasp movements. Neurorehabilitation and neural repair 24(2), 141–151 (2010) Huang et al. [2015] Huang, D.-A., Ma, M., Ma, W.-C., Kitani, K.M.: How do we use our hands? discovering a diverse set of common grasps. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 666–675 (2015) Dousty and Zariffa [2020] Dousty, M., Zariffa, J.: Towards clustering hand grasps of individuals with spinal cord injury in egocentric video. In: 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), pp. 2151–2154 (2020). IEEE Domingos [2012] Domingos, P.: A few useful things to know about machine learning. Communications of the ACM 55(10), 78–87 (2012) Aggarwal et al. 
[2001] Aggarwal, C.C., Hinneburg, A., Keim, D.A.: On the surprising behavior of distance metrics in high dimensional space. In: International Conference on Database Theory, pp. 420–434 (2001). Springer LeCun et al. [2015] LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. nature 521(7553), 436–444 (2015) Hu et al. [2017] Hu, W., Miyato, T., Tokui, S., Matsumoto, E., Sugiyama, M.: Learning discrete representations via information maximizing self-augmented training. In: International Conference on Machine Learning, pp. 1558–1567 (2017). PMLR Guérin et al. [2017] Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. 
[2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. 
SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Bensmail, D., Robertson, J., Fermanian, C., Roby-Brami, A.: Botulinum toxin to treat upper-limb spasticity in hemiparetic patients: grasp strategies and kinematics of reach-to-grasp movements. Neurorehabilitation and neural repair 24(2), 141–151 (2010) Huang et al. [2015] Huang, D.-A., Ma, M., Ma, W.-C., Kitani, K.M.: How do we use our hands? discovering a diverse set of common grasps. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 666–675 (2015) Dousty and Zariffa [2020] Dousty, M., Zariffa, J.: Towards clustering hand grasps of individuals with spinal cord injury in egocentric video. In: 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), pp. 2151–2154 (2020). IEEE Domingos [2012] Domingos, P.: A few useful things to know about machine learning. Communications of the ACM 55(10), 78–87 (2012) Aggarwal et al. [2001] Aggarwal, C.C., Hinneburg, A., Keim, D.A.: On the surprising behavior of distance metrics in high dimensional space. In: International Conference on Database Theory, pp. 420–434 (2001). Springer LeCun et al. [2015] LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. nature 521(7553), 436–444 (2015) Hu et al. [2017] Hu, W., Miyato, T., Tokui, S., Matsumoto, E., Sugiyama, M.: Learning discrete representations via information maximizing self-augmented training. In: International Conference on Machine Learning, pp. 1558–1567 (2017). PMLR Guérin et al. [2017] Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. 
[2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. 
[2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Huang, D.-A., Ma, M., Ma, W.-C., Kitani, K.M.: How do we use our hands? discovering a diverse set of common grasps. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 666–675 (2015) Dousty and Zariffa [2020] Dousty, M., Zariffa, J.: Towards clustering hand grasps of individuals with spinal cord injury in egocentric video. In: 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), pp. 2151–2154 (2020). IEEE Domingos [2012] Domingos, P.: A few useful things to know about machine learning. Communications of the ACM 55(10), 78–87 (2012) Aggarwal et al. [2001] Aggarwal, C.C., Hinneburg, A., Keim, D.A.: On the surprising behavior of distance metrics in high dimensional space. In: International Conference on Database Theory, pp. 420–434 (2001). Springer LeCun et al. [2015] LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. nature 521(7553), 436–444 (2015) Hu et al. [2017] Hu, W., Miyato, T., Tokui, S., Matsumoto, E., Sugiyama, M.: Learning discrete representations via information maximizing self-augmented training. In: International Conference on Machine Learning, pp. 
1558–1567 (2017). PMLR Guérin et al. [2017] Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. 
[2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. 
[2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Dousty, M., Zariffa, J.: Towards clustering hand grasps of individuals with spinal cord injury in egocentric video. In: 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), pp. 2151–2154 (2020). IEEE Domingos [2012] Domingos, P.: A few useful things to know about machine learning. Communications of the ACM 55(10), 78–87 (2012) Aggarwal et al. [2001] Aggarwal, C.C., Hinneburg, A., Keim, D.A.: On the surprising behavior of distance metrics in high dimensional space. In: International Conference on Database Theory, pp. 420–434 (2001). Springer LeCun et al. [2015] LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. nature 521(7553), 436–444 (2015) Hu et al. [2017] Hu, W., Miyato, T., Tokui, S., Matsumoto, E., Sugiyama, M.: Learning discrete representations via information maximizing self-augmented training. In: International Conference on Machine Learning, pp. 1558–1567 (2017). PMLR Guérin et al. [2017] Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. 
[2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. 
IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Domingos, P.: A few useful things to know about machine learning. Communications of the ACM 55(10), 78–87 (2012) Aggarwal et al. [2001] Aggarwal, C.C., Hinneburg, A., Keim, D.A.: On the surprising behavior of distance metrics in high dimensional space. In: International Conference on Database Theory, pp. 420–434 (2001). Springer LeCun et al. [2015] LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. nature 521(7553), 436–444 (2015) Hu et al. [2017] Hu, W., Miyato, T., Tokui, S., Matsumoto, E., Sugiyama, M.: Learning discrete representations via information maximizing self-augmented training. In: International Conference on Machine Learning, pp. 1558–1567 (2017). PMLR Guérin et al. [2017] Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. 
[2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Aggarwal, C.C., Hinneburg, A., Keim, D.A.: On the surprising behavior of distance metrics in high dimensional space. In: International Conference on Database Theory, pp. 420–434 (2001). Springer LeCun et al. [2015] LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. nature 521(7553), 436–444 (2015) Hu et al. [2017] Hu, W., Miyato, T., Tokui, S., Matsumoto, E., Sugiyama, M.: Learning discrete representations via information maximizing self-augmented training. In: International Conference on Machine Learning, pp. 1558–1567 (2017). PMLR Guérin et al. [2017] Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. 
[2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. 
[2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. nature 521(7553), 436–444 (2015) Hu et al. [2017] Hu, W., Miyato, T., Tokui, S., Matsumoto, E., Sugiyama, M.: Learning discrete representations via information maximizing self-augmented training. In: International Conference on Machine Learning, pp. 1558–1567 (2017). PMLR Guérin et al. [2017] Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. 
Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. 
Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Hu, W., Miyato, T., Tokui, S., Matsumoto, E., Sugiyama, M.: Learning discrete representations via information maximizing self-augmented training. 
In: International Conference on Machine Learning, pp. 1558–1567 (2017). PMLR Guérin et al. [2017] Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. 
[2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. 
[2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. 
[2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. 
[2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. 
[2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. 
[2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. 
[2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. 
[2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. 
13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. 
[2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 
806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. 
[2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. 
IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. 
[2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. 
Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. 
Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. 
[2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020)
[2015] Huang, D.-A., Ma, M., Ma, W.-C., Kitani, K.M.: How do we use our hands? discovering a diverse set of common grasps. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 666–675 (2015) Dousty and Zariffa [2020] Dousty, M., Zariffa, J.: Towards clustering hand grasps of individuals with spinal cord injury in egocentric video. In: 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), pp. 2151–2154 (2020). IEEE Domingos [2012] Domingos, P.: A few useful things to know about machine learning. Communications of the ACM 55(10), 78–87 (2012) Aggarwal et al. [2001] Aggarwal, C.C., Hinneburg, A., Keim, D.A.: On the surprising behavior of distance metrics in high dimensional space. In: International Conference on Database Theory, pp. 420–434 (2001). Springer LeCun et al. [2015] LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. nature 521(7553), 436–444 (2015) Hu et al. [2017] Hu, W., Miyato, T., Tokui, S., Matsumoto, E., Sugiyama, M.: Learning discrete representations via information maximizing self-augmented training. In: International Conference on Machine Learning, pp. 1558–1567 (2017). PMLR Guérin et al. [2017] Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. 
[2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. 
IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Bensmail, D., Robertson, J., Fermanian, C., Roby-Brami, A.: Botulinum toxin to treat upper-limb spasticity in hemiparetic patients: grasp strategies and kinematics of reach-to-grasp movements. Neurorehabilitation and neural repair 24(2), 141–151 (2010) Huang et al. [2015] Huang, D.-A., Ma, M., Ma, W.-C., Kitani, K.M.: How do we use our hands? discovering a diverse set of common grasps. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 666–675 (2015) Dousty and Zariffa [2020] Dousty, M., Zariffa, J.: Towards clustering hand grasps of individuals with spinal cord injury in egocentric video. In: 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), pp. 2151–2154 (2020). IEEE Domingos [2012] Domingos, P.: A few useful things to know about machine learning. Communications of the ACM 55(10), 78–87 (2012) Aggarwal et al. [2001] Aggarwal, C.C., Hinneburg, A., Keim, D.A.: On the surprising behavior of distance metrics in high dimensional space. In: International Conference on Database Theory, pp. 420–434 (2001). Springer LeCun et al. [2015] LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. nature 521(7553), 436–444 (2015) Hu et al. [2017] Hu, W., Miyato, T., Tokui, S., Matsumoto, E., Sugiyama, M.: Learning discrete representations via information maximizing self-augmented training. In: International Conference on Machine Learning, pp. 1558–1567 (2017). PMLR Guérin et al. [2017] Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. 
Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. 
[2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Huang, D.-A., Ma, M., Ma, W.-C., Kitani, K.M.: How do we use our hands? discovering a diverse set of common grasps. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 666–675 (2015) Dousty and Zariffa [2020] Dousty, M., Zariffa, J.: Towards clustering hand grasps of individuals with spinal cord injury in egocentric video. In: 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), pp. 2151–2154 (2020). IEEE Domingos [2012] Domingos, P.: A few useful things to know about machine learning. 
Communications of the ACM 55(10), 78–87 (2012) Aggarwal et al. [2001] Aggarwal, C.C., Hinneburg, A., Keim, D.A.: On the surprising behavior of distance metrics in high dimensional space. In: International Conference on Database Theory, pp. 420–434 (2001). Springer LeCun et al. [2015] LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. nature 521(7553), 436–444 (2015) Hu et al. [2017] Hu, W., Miyato, T., Tokui, S., Matsumoto, E., Sugiyama, M.: Learning discrete representations via information maximizing self-augmented training. In: International Conference on Machine Learning, pp. 1558–1567 (2017). PMLR Guérin et al. [2017] Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. 
[2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. 
SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Dousty, M., Zariffa, J.: Towards clustering hand grasps of individuals with spinal cord injury in egocentric video. In: 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), pp. 2151–2154 (2020). IEEE Domingos [2012] Domingos, P.: A few useful things to know about machine learning. Communications of the ACM 55(10), 78–87 (2012) Aggarwal et al. [2001] Aggarwal, C.C., Hinneburg, A., Keim, D.A.: On the surprising behavior of distance metrics in high dimensional space. In: International Conference on Database Theory, pp. 420–434 (2001). Springer LeCun et al. [2015] LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. nature 521(7553), 436–444 (2015) Hu et al. [2017] Hu, W., Miyato, T., Tokui, S., Matsumoto, E., Sugiyama, M.: Learning discrete representations via information maximizing self-augmented training. In: International Conference on Machine Learning, pp. 1558–1567 (2017). PMLR Guérin et al. [2017] Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. 
In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. 
[2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Domingos, P.: A few useful things to know about machine learning. Communications of the ACM 55(10), 78–87 (2012) Aggarwal et al. [2001] Aggarwal, C.C., Hinneburg, A., Keim, D.A.: On the surprising behavior of distance metrics in high dimensional space. In: International Conference on Database Theory, pp. 420–434 (2001). Springer LeCun et al. [2015] LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. nature 521(7553), 436–444 (2015) Hu et al. [2017] Hu, W., Miyato, T., Tokui, S., Matsumoto, E., Sugiyama, M.: Learning discrete representations via information maximizing self-augmented training. In: International Conference on Machine Learning, pp. 1558–1567 (2017). PMLR Guérin et al. [2017] Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. 
[2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. 
[2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Aggarwal, C.C., Hinneburg, A., Keim, D.A.: On the surprising behavior of distance metrics in high dimensional space. In: International Conference on Database Theory, pp. 420–434 (2001). Springer LeCun et al. [2015] LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. nature 521(7553), 436–444 (2015) Hu et al. [2017] Hu, W., Miyato, T., Tokui, S., Matsumoto, E., Sugiyama, M.: Learning discrete representations via information maximizing self-augmented training. In: International Conference on Machine Learning, pp. 1558–1567 (2017). PMLR Guérin et al. [2017] Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. 
Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. 
[2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. 
[2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. 
SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. 
[2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. 
[2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. 
[2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. 
IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. 
[2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. 
In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. 
[2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 
702–703 (2020) Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. 
[2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. 
[2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 
702–703 (2020) Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. 
In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. 
Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 
464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. 
IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. 
In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020)
- Bandini, A., Dousty, M., Hitzig, S.L., Craven, B.C., Kalsi-Ryan, S., Zariffa, J.: Measuring hand use in the home after cervical spinal cord injury using egocentric video. Journal of neurotrauma 39(23-24), 1697–1707 (2022) Feix et al. [2015] Feix, T., Romero, J., Schmiedmayer, H.-B., Dollar, A.M., Kragic, D.: The grasp taxonomy of human grasp types. IEEE Transactions on human-machine systems 46(1), 66–77 (2015) Hermsdörfer et al. [2003] Hermsdörfer, J., Hagl, E., Nowak, D., Marquardt, C.: Grip force control during object manipulation in cerebral stroke. Clinical neurophysiology 114(5), 915–929 (2003) Bensmail et al. [2010] Bensmail, D., Robertson, J., Fermanian, C., Roby-Brami, A.: Botulinum toxin to treat upper-limb spasticity in hemiparetic patients: grasp strategies and kinematics of reach-to-grasp movements. Neurorehabilitation and neural repair 24(2), 141–151 (2010) Huang et al. [2015] Huang, D.-A., Ma, M., Ma, W.-C., Kitani, K.M.: How do we use our hands? discovering a diverse set of common grasps. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 666–675 (2015) Dousty and Zariffa [2020] Dousty, M., Zariffa, J.: Towards clustering hand grasps of individuals with spinal cord injury in egocentric video. In: 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), pp. 2151–2154 (2020). IEEE Domingos [2012] Domingos, P.: A few useful things to know about machine learning. Communications of the ACM 55(10), 78–87 (2012) Aggarwal et al. [2001] Aggarwal, C.C., Hinneburg, A., Keim, D.A.: On the surprising behavior of distance metrics in high dimensional space. In: International Conference on Database Theory, pp. 420–434 (2001). Springer LeCun et al. [2015] LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. nature 521(7553), 436–444 (2015) Hu et al. [2017] Hu, W., Miyato, T., Tokui, S., Matsumoto, E., Sugiyama, M.: Learning discrete representations via information maximizing self-augmented training. In: International Conference on Machine Learning, pp. 1558–1567 (2017). PMLR Guérin et al. [2017] Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. 
[2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Feix, T., Romero, J., Schmiedmayer, H.-B., Dollar, A.M., Kragic, D.: The grasp taxonomy of human grasp types. IEEE Transactions on human-machine systems 46(1), 66–77 (2015) Hermsdörfer et al. [2003] Hermsdörfer, J., Hagl, E., Nowak, D., Marquardt, C.: Grip force control during object manipulation in cerebral stroke. Clinical neurophysiology 114(5), 915–929 (2003) Bensmail et al. [2010] Bensmail, D., Robertson, J., Fermanian, C., Roby-Brami, A.: Botulinum toxin to treat upper-limb spasticity in hemiparetic patients: grasp strategies and kinematics of reach-to-grasp movements. Neurorehabilitation and neural repair 24(2), 141–151 (2010) Huang et al. [2015] Huang, D.-A., Ma, M., Ma, W.-C., Kitani, K.M.: How do we use our hands? discovering a diverse set of common grasps. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 666–675 (2015) Dousty and Zariffa [2020] Dousty, M., Zariffa, J.: Towards clustering hand grasps of individuals with spinal cord injury in egocentric video. In: 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), pp. 2151–2154 (2020). IEEE Domingos [2012] Domingos, P.: A few useful things to know about machine learning. Communications of the ACM 55(10), 78–87 (2012) Aggarwal et al. 
[2001] Aggarwal, C.C., Hinneburg, A., Keim, D.A.: On the surprising behavior of distance metrics in high dimensional space. In: International Conference on Database Theory, pp. 420–434 (2001). Springer LeCun et al. [2015] LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. nature 521(7553), 436–444 (2015) Hu et al. [2017] Hu, W., Miyato, T., Tokui, S., Matsumoto, E., Sugiyama, M.: Learning discrete representations via information maximizing self-augmented training. In: International Conference on Machine Learning, pp. 1558–1567 (2017). PMLR Guérin et al. [2017] Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. 
[2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. 
SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Hermsdörfer, J., Hagl, E., Nowak, D., Marquardt, C.: Grip force control during object manipulation in cerebral stroke. Clinical neurophysiology 114(5), 915–929 (2003) Bensmail et al. [2010] Bensmail, D., Robertson, J., Fermanian, C., Roby-Brami, A.: Botulinum toxin to treat upper-limb spasticity in hemiparetic patients: grasp strategies and kinematics of reach-to-grasp movements. Neurorehabilitation and neural repair 24(2), 141–151 (2010) Huang et al. [2015] Huang, D.-A., Ma, M., Ma, W.-C., Kitani, K.M.: How do we use our hands? discovering a diverse set of common grasps. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 666–675 (2015) Dousty and Zariffa [2020] Dousty, M., Zariffa, J.: Towards clustering hand grasps of individuals with spinal cord injury in egocentric video. In: 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), pp. 2151–2154 (2020). IEEE Domingos [2012] Domingos, P.: A few useful things to know about machine learning. Communications of the ACM 55(10), 78–87 (2012) Aggarwal et al. [2001] Aggarwal, C.C., Hinneburg, A., Keim, D.A.: On the surprising behavior of distance metrics in high dimensional space. In: International Conference on Database Theory, pp. 420–434 (2001). Springer LeCun et al. [2015] LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. nature 521(7553), 436–444 (2015) Hu et al. [2017] Hu, W., Miyato, T., Tokui, S., Matsumoto, E., Sugiyama, M.: Learning discrete representations via information maximizing self-augmented training. In: International Conference on Machine Learning, pp. 1558–1567 (2017). PMLR Guérin et al. [2017] Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. 
[2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. 
[2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Bensmail, D., Robertson, J., Fermanian, C., Roby-Brami, A.: Botulinum toxin to treat upper-limb spasticity in hemiparetic patients: grasp strategies and kinematics of reach-to-grasp movements. Neurorehabilitation and neural repair 24(2), 141–151 (2010) Huang et al. [2015] Huang, D.-A., Ma, M., Ma, W.-C., Kitani, K.M.: How do we use our hands? discovering a diverse set of common grasps. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 666–675 (2015) Dousty and Zariffa [2020] Dousty, M., Zariffa, J.: Towards clustering hand grasps of individuals with spinal cord injury in egocentric video. In: 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), pp. 2151–2154 (2020). IEEE Domingos [2012] Domingos, P.: A few useful things to know about machine learning. 
Communications of the ACM 55(10), 78–87 (2012) Aggarwal et al. [2001] Aggarwal, C.C., Hinneburg, A., Keim, D.A.: On the surprising behavior of distance metrics in high dimensional space. In: International Conference on Database Theory, pp. 420–434 (2001). Springer LeCun et al. [2015] LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. nature 521(7553), 436–444 (2015) Hu et al. [2017] Hu, W., Miyato, T., Tokui, S., Matsumoto, E., Sugiyama, M.: Learning discrete representations via information maximizing self-augmented training. In: International Conference on Machine Learning, pp. 1558–1567 (2017). PMLR Guérin et al. [2017] Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. 
[2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. 
SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Huang, D.-A., Ma, M., Ma, W.-C., Kitani, K.M.: How do we use our hands? discovering a diverse set of common grasps. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 666–675 (2015) Dousty and Zariffa [2020] Dousty, M., Zariffa, J.: Towards clustering hand grasps of individuals with spinal cord injury in egocentric video. In: 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), pp. 2151–2154 (2020). IEEE Domingos [2012] Domingos, P.: A few useful things to know about machine learning. Communications of the ACM 55(10), 78–87 (2012) Aggarwal et al. [2001] Aggarwal, C.C., Hinneburg, A., Keim, D.A.: On the surprising behavior of distance metrics in high dimensional space. In: International Conference on Database Theory, pp. 420–434 (2001). Springer LeCun et al. [2015] LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. nature 521(7553), 436–444 (2015) Hu et al. [2017] Hu, W., Miyato, T., Tokui, S., Matsumoto, E., Sugiyama, M.: Learning discrete representations via information maximizing self-augmented training. In: International Conference on Machine Learning, pp. 1558–1567 (2017). PMLR Guérin et al. [2017] Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. 
[2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. 
SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. 
[2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 
702–703 (2020) Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. 
[2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. 
IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). 
PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? 
Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. 
In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. 
[2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? 
Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. 
In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). 
PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. 
[2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? 
Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. 
In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. 
Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. 
IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). 
PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). 
PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. 
IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. 
Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 
702–703 (2020) Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. 
[2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. 
IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. 
[2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 
702–703 (2020) Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020)
- Feix, T., Romero, J., Schmiedmayer, H.-B., Dollar, A.M., Kragic, D.: The grasp taxonomy of human grasp types. IEEE Transactions on human-machine systems 46(1), 66–77 (2015) Hermsdörfer et al. [2003] Hermsdörfer, J., Hagl, E., Nowak, D., Marquardt, C.: Grip force control during object manipulation in cerebral stroke. Clinical neurophysiology 114(5), 915–929 (2003) Bensmail et al. [2010] Bensmail, D., Robertson, J., Fermanian, C., Roby-Brami, A.: Botulinum toxin to treat upper-limb spasticity in hemiparetic patients: grasp strategies and kinematics of reach-to-grasp movements. Neurorehabilitation and neural repair 24(2), 141–151 (2010) Huang et al. [2015] Huang, D.-A., Ma, M., Ma, W.-C., Kitani, K.M.: How do we use our hands? discovering a diverse set of common grasps. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 666–675 (2015) Dousty and Zariffa [2020] Dousty, M., Zariffa, J.: Towards clustering hand grasps of individuals with spinal cord injury in egocentric video. In: 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), pp. 2151–2154 (2020). IEEE Domingos [2012] Domingos, P.: A few useful things to know about machine learning. Communications of the ACM 55(10), 78–87 (2012) Aggarwal et al. [2001] Aggarwal, C.C., Hinneburg, A., Keim, D.A.: On the surprising behavior of distance metrics in high dimensional space. In: International Conference on Database Theory, pp. 420–434 (2001). Springer LeCun et al. [2015] LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. nature 521(7553), 436–444 (2015) Hu et al. [2017] Hu, W., Miyato, T., Tokui, S., Matsumoto, E., Sugiyama, M.: Learning discrete representations via information maximizing self-augmented training. In: International Conference on Machine Learning, pp. 1558–1567 (2017). PMLR Guérin et al. [2017] Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. 
In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. 
[2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Hermsdörfer, J., Hagl, E., Nowak, D., Marquardt, C.: Grip force control during object manipulation in cerebral stroke. Clinical neurophysiology 114(5), 915–929 (2003) Bensmail et al. [2010] Bensmail, D., Robertson, J., Fermanian, C., Roby-Brami, A.: Botulinum toxin to treat upper-limb spasticity in hemiparetic patients: grasp strategies and kinematics of reach-to-grasp movements. Neurorehabilitation and neural repair 24(2), 141–151 (2010) Huang et al. [2015] Huang, D.-A., Ma, M., Ma, W.-C., Kitani, K.M.: How do we use our hands? discovering a diverse set of common grasps. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 666–675 (2015) Dousty and Zariffa [2020] Dousty, M., Zariffa, J.: Towards clustering hand grasps of individuals with spinal cord injury in egocentric video. In: 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), pp. 2151–2154 (2020). IEEE Domingos [2012] Domingos, P.: A few useful things to know about machine learning. Communications of the ACM 55(10), 78–87 (2012) Aggarwal et al. [2001] Aggarwal, C.C., Hinneburg, A., Keim, D.A.: On the surprising behavior of distance metrics in high dimensional space. In: International Conference on Database Theory, pp. 420–434 (2001). Springer LeCun et al. [2015] LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. nature 521(7553), 436–444 (2015) Hu et al. [2017] Hu, W., Miyato, T., Tokui, S., Matsumoto, E., Sugiyama, M.: Learning discrete representations via information maximizing self-augmented training. 
In: International Conference on Machine Learning, pp. 1558–1567 (2017). PMLR
Guérin et al. [2017] Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: CNN features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017)
Wang and Jiang [2020] Wang, J., Jiang, J.: SA-Net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020)
Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE
Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained CNN feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021)
Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019)
Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018)
Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015)
Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017)
Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021)
Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020)
Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020)
Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of Neuroengineering and Rehabilitation 16(1), 1–11 (2019)
Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM Computing Surveys (CSUR) 31(3), 264–323 (1999)
Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017)
Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent: a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020)
Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021)
Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021)
Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature Communications 11(1), 746 (2020)
Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? Investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022)
Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: CNN features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014)
Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: WILDS: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR
Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020)
Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: ImageNet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). IEEE
Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010)
Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR
Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748
de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016)
Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020)
Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006)
Fraley [1998] Fraley, C.: Algorithms for model-based Gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998)
Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020)
Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020)
Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: RandAugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020)
[2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 
702–703 (2020) Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. 
[2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. 
[2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. 
[2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. 
[2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. 
[2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. 
[2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. 
In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. 
[2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. 
SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 
13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. 
[2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. 
In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. 
[2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 
702–703 (2020) Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. 
In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. 
[2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. 
Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. 
Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). 
PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. 
arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. 
[2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. 
[2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. 
[2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 
806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. 
[2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. 
IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. 
[2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. 
Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. 
Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. 
[2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020)
- Hermsdörfer, J., Hagl, E., Nowak, D., Marquardt, C.: Grip force control during object manipulation in cerebral stroke. Clinical neurophysiology 114(5), 915–929 (2003)
- Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home.
Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. 
Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. 
[1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. 
[2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. 
[2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. 
Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. 
Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. 
Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. 
Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. 
[2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. 
[2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. 
In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. 
[2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. 
Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. 
Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. 
investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. 
[2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. 
In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. 
[2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. 
[2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. 
[2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. 
[2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. 
[2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. 
In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. 
[2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. 
IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. 
[2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. 
[2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. 
[2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. 
[2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. 
IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? 
investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. 
[2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. 
IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. 
[2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. 
[2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. 
[2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. 
Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. 
In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. 
arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). 
PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. 
[2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. 
[2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. 
In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. 
SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. 
PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. 
[2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. 
IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. 
[2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. 
Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. 
[2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. 
[2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? 
investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. 
[2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. 
[2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. 
Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. 
In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. 
[2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. 
In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Aggarwal, C.C., Hinneburg, A., Keim, D.A.: On the surprising behavior of distance metrics in high dimensional space. In: International Conference on Database Theory, pp. 420–434 (2001). Springer LeCun et al. [2015] LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. nature 521(7553), 436–444 (2015) Hu et al. [2017] Hu, W., Miyato, T., Tokui, S., Matsumoto, E., Sugiyama, M.: Learning discrete representations via information maximizing self-augmented training. In: International Conference on Machine Learning, pp. 1558–1567 (2017). PMLR Guérin et al. [2017] Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. 
[2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. 
[2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. nature 521(7553), 436–444 (2015) Hu et al. [2017] Hu, W., Miyato, T., Tokui, S., Matsumoto, E., Sugiyama, M.: Learning discrete representations via information maximizing self-augmented training. In: International Conference on Machine Learning, pp. 1558–1567 (2017). PMLR Guérin et al. [2017] Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. 
[2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. 
In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Hu, W., Miyato, T., Tokui, S., Matsumoto, E., Sugiyama, M.: Learning discrete representations via information maximizing self-augmented training. In: International Conference on Machine Learning, pp. 1558–1567 (2017). PMLR Guérin et al. [2017] Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. 
arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. 
[2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 
702–703 (2020) Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? 
Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. 
In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? 
Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. 
In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. 
Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. 
Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. 
[2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. 
[2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 
806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. 
[2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. 
IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. 
[2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. 
Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. 
Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. 
[2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020)
- Domingos, P.: A few useful things to know about machine learning. Communications of the ACM 55(10), 78–87 (2012) Aggarwal et al. [2001] Aggarwal, C.C., Hinneburg, A., Keim, D.A.: On the surprising behavior of distance metrics in high dimensional space. In: International Conference on Database Theory, pp. 420–434 (2001). Springer LeCun et al. [2015] LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. nature 521(7553), 436–444 (2015) Hu et al. [2017] Hu, W., Miyato, T., Tokui, S., Matsumoto, E., Sugiyama, M.: Learning discrete representations via information maximizing self-augmented training. In: International Conference on Machine Learning, pp. 1558–1567 (2017). PMLR Guérin et al. [2017] Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments.
Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al.
[2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: Cnn features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017) Wang and Jiang [2020] Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. 
[2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. 
[2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020) Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. 
Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. 
[2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. 
[2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. 
In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. 
[2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. 
[2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. 
IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. 
[2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. 
SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. 
[2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. 
[2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. 
In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. 
[2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. 
SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. 
[2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. 
[2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. 
SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. 
In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? 
investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. 
[2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. 
SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). 
PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. 
[2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. 
[2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. 
[2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. 
[2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. 
[2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. 
[2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? 
investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. 
[2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. 
[2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. 
Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. 
In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. 
IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). 
PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? 
investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. 
ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. 
Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. 
SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. 
Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. 
Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained cnn feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021) Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? 
Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. 
In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019) Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 
13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. 
[2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. 
[2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. 
IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. 
[2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. 
[2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. 
[2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. 
[2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. 
SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. 
[2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. 
[2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. 
[2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. 
IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. 
[2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. 
In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. 
[2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 
702–703 (2020) Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. 
[2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. 
[2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 
702–703 (2020) Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. 
In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. 
Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 
464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. 
IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. 
[2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. 
In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? 
investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. 
[2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. 
SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. 
[2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 
5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 
13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. 
[2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 
806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. 
[2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. 
IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. 
[2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. 
Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. 
Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. 
[2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020)
- Guérin, J., Gibaru, O., Thiery, S., Nyiri, E.: CNN features are also great at unsupervised classification. arXiv preprint arXiv:1707.01700 (2017)
- Wang, J., Jiang, J.: SA-Net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020)
- Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE
- Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained CNN feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021)
- Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019)
- Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018)
- Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015)
- Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017)
- Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021)
- Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020)
- Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020)
- Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of NeuroEngineering and Rehabilitation 16(1), 1–11 (2019)
- Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM Computing Surveys (CSUR) 31(3), 264–323 (1999)
- Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017)
- Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent: a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020)
- Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021)
- Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021)
- Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature Communications 11(1), 746 (2020)
- Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? Investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022)
- Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: CNN features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014)
- Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: WILDS: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR
- Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020)
- Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: ImageNet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). IEEE
- Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010)
- Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR
- Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748
- Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016)
- Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020)
- Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006)
- Fraley, C.: Algorithms for model-based Gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998)
- Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020)
- Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020)
[2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 
702–703 (2020) Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. 
In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. 
[2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. 
Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. 
Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). 
PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. 
arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. 
[2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. 
[2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. 
[2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 
806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. 
[2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. 
IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. 
[2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. 
Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. 
Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. 
[2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020)
- Wang, J., Jiang, J.: SA-Net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10–23 (2020)
Shiran and Weinshall [2021] Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE
Guérin et al. [2021] Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained CNN feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021)
Genevay et al. [2019] Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019)
Li et al. [2018] Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018)
Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015)
Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017)
Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021)
Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020)
Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020)
Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of NeuroEngineering and Rehabilitation 16(1), 1–11 (2019)
Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM Computing Surveys (CSUR) 31(3), 264–323 (1999)
Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017)
Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent: a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020)
Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021)
Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021)
Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature Communications 11(1), 746 (2020)
Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? Investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022)
Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: CNN features off-the-shelf: An astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014)
Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: WILDS: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR
702–703 (2020) Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. 
In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. 
[2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. 
Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. 
Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). 
PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. 
arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. 
[2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. 
[2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. 
[2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 
806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. 
[2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. 
IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. 
[2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. 
Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. 
Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. 
[2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020)
- Shiran, G., Weinshall, D.: Multi-modal deep clustering: Unsupervised partitioning of images. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4728–4735 (2021). IEEE
- Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained CNN feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021)
- Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019)
- Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018)
- Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015)
- Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017)
- Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021)
- Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020)
- Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020)
- Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of NeuroEngineering and Rehabilitation 16(1), 1–11 (2019)
- Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM Computing Surveys (CSUR) 31(3), 264–323 (1999)
- Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017)
- Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent: a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020)
- Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021)
- Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021)
- Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature Communications 11(1), 746 (2020)
- Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? Investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022)
- Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: CNN features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014)
- Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: WILDS: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR
- Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020)
- Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: ImageNet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). IEEE
- Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010)
- Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR
- Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018). https://doi.org/10.1109/JBHI.2016.2636748
- Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016)
- Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020)
- Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006)
- Fraley, C.: Algorithms for model-based Gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998)
- Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020)
- Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020)
- Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: RandAugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020)
Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. 
Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). 
PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. 
arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. 
[2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. 
[2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. 
[2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 
806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. 
[2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. 
IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. 
[2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. 
Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. 
Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. 
[2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020)
- Guérin, J., Thiery, S., Nyiri, E., Gibaru, O., Boots, B.: Combining pretrained CNN feature extractors to enhance clustering of complex natural images. Neurocomputing 423, 551–571 (2021)
- Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019)
- Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018)
- Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015)
- Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017)
- Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021)
- Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020)
- Visée, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020)
- Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of NeuroEngineering and Rehabilitation 16(1), 1–11 (2019)
- Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM Computing Surveys (CSUR) 31(3), 264–323 (1999)
- Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017)
- Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent: a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020)
- Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021)
- Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021)
- Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature Communications 11(1), 746 (2020)
- Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? Investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022)
- Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: CNN features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014)
- Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: WILDS: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR
- Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020)
- Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: ImageNet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). IEEE
- Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010)
- Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR
- Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018). https://doi.org/10.1109/JBHI.2016.2636748
- de Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016)
- Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020)
- Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006)
- Fraley, C.: Algorithms for model-based Gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998)
- Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020)
- Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020)
- Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: RandAugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020)
[2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018) Gong et al. [2015] Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. 
[2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. 
In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. 
[2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. 
SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 
13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. 
[2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. 
In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. 
[2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 
702–703 (2020) Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. 
In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. 
[2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. 
Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. 
Advances in Neural Information Processing Systems 33, 18661–18673 (2020)
- Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020)
- Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: RandAugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020)
- Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017)
- Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent: a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020)
- Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021)
- Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021)
- Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature Communications 11(1), 746 (2020)
- Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? Investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022)
- Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: CNN features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014)
- Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: WILDS: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR
- Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020)
- Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: ImageNet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). IEEE
- Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010)
- Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR
- Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018). https://doi.org/10.1109/JBHI.2016.2636748
- Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016)
- Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020)
- Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006)
- Fraley, C.: Algorithms for model-based Gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998)
- Genevay, A., Dulac-Arnold, G., Vert, J.-P.: Differentiable deep clustering with cluster size constraints. arXiv preprint arXiv:1910.09036 (2019)
- Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018)
- Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015)
- Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017)
- Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021)
- Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020)
- Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020)
- Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of NeuroEngineering and Rehabilitation 16(1), 1–11 (2019)
- Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM Computing Surveys (CSUR) 31(3), 264–323 (1999)
- Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition.
In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. 
[2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. 
[2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. 
In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? 
investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. 
[2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. 
SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. 
[2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 
5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 
13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. 
[2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 
806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. 
- Li, F., Qiao, H., Zhang, B.: Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition 83, 161–173 (2018)
- Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015)
- Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017)
- Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021)
- Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020)
- Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020)
- Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019)
- Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM Computing Surveys (CSUR) 31(3), 264–323 (1999)
- Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017)
- Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent: a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020)
- Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021)
- Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021)
- Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature Communications 11(1), 746 (2020)
- Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? Investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022)
- Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: CNN features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014)
- Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: WILDS: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR
- Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020)
- Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: ImageNet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). IEEE
- Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010)
- Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR
- Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018). https://doi.org/10.1109/JBHI.2016.2636748
- Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016)
- Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020)
- Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006)
- Fraley, C.: Algorithms for model-based Gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998)
- Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020)
- Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020)
- Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: RandAugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020)
[2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015) Simon et al. [2017] Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. 
In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. 
[2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017) Lin et al. [2021] Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. 
Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. 
Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. 
[2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. 
ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. 
Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. 
[2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. 
Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. 
Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). 
PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 
- Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 19–27 (2015)
- Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017)
- Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021)
- Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020)
- Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020)
- Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of NeuroEngineering and Rehabilitation 16(1), 1–11 (2019)
- Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks?
Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. 
In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). 
PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. 
[2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? 
Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. 
In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. 
Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. 
IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). 
PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). 
PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. 
IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. 
Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 
702–703 (2020) Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. 
[2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. 
IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. 
[2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 
702–703 (2020) Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020)
- Simon, T., Joo, H., Matthews, I., Sheikh, Y.: Hand keypoint detection in single images using multiview bootstrapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1145–1153 (2017)
- Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021)
- Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020)
- Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020)
- Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of NeuroEngineering and Rehabilitation 16(1), 1–11 (2019)
- Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM Computing Surveys (CSUR) 31(3), 264–323 (1999)
- Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017)
- Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent: a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020)
- Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021)
- Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021)
- Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature Communications 11(1), 746 (2020)
- Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? Investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022)
- Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: CNN features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014)
- Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: WILDS: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR
- Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020)
- Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: ImageNet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). IEEE
- Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010)
- Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR
- Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018). https://doi.org/10.1109/JBHI.2016.2636748
- Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016)
- Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006)
- Fraley, C.: Algorithms for model-based Gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998)
- Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020)
- Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020)
- Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: RandAugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020)
SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. 
[2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. 
In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. 
[2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. 
[2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020)
- Lin, K., Wang, L., Liu, Z.: End-to-end human pose and mesh reconstruction with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1954–1963 (2021) Shan et al. [2020] Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. 
In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020) Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. 
[2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. 
[2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 
248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. 
[2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. 
[2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. 
Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. 
[2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. 
[2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. 
- Shan, D., Geng, J., Shu, M., Fouhey, D.F.: Understanding human hands in contact at internet scale. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9869–9878 (2020)
Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020)
Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019)
Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM Computing Surveys (CSUR) 31(3), 264–323 (1999)
Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017)
In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020) Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. 
[2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 
702–703 (2020) Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. 
IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. 
[2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. 
[2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. 
Visee et al. [2020] Visee, R.J., Likitlersuang, J., Zariffa, J.: An effective and efficient method for detecting hands in egocentric videos for rehabilitation applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(3), 748–755 (2020)
Likitlersuang et al. [2019] Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: A new tool for capturing hand use of individuals with spinal cord injury at home. Journal of NeuroEngineering and Rehabilitation 16(1), 1–11 (2019)
Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: A review. ACM Computing Surveys (CSUR) 31(3), 264–323 (1999)
In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. 
[2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. 
Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. 
Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). 
PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. 
- Likitlersuang, J., Sumitro, E.R., Cao, T., Visée, R.J., Kalsi-Ryan, S., Zariffa, J.: Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of neuroengineering and rehabilitation 16(1), 1–11 (2019) Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. 
IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM computing surveys (CSUR) 31(3), 264–323 (1999) Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. 
[2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017) Grill et al. [2020] Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. 
[2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent-a new approach to self-supervised learning. 
Advances in Neural Information Processing Systems 33, 21271–21284 (2020) Arinik et al. [2021] Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. 
Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021) Raghu et al. [2021] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. 
Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM Computing Surveys (CSUR) 31(3), 264–323 (1999)
Saxena et al. [2017] Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017)
Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning.
Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. 
[2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. 
Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. 
[2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020)
- Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O.P., Tiwari, A., Er, M.J., Ding, W., Lin, C.-T.: A review of clustering techniques and developments. Neurocomputing 267, 664–681 (2017)
- Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al.: Bootstrap your own latent - a new approach to self-supervised learning. Advances in Neural Information Processing Systems 33, 21271–21284 (2020)
- Arinik, N., Labatut, V., Figueiredo, R.: Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access 9, 20255–20276 (2021)
- Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021)
- Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020)
- Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: CNN features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014)
- Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: WILDS: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR
- Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020)
- Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: ImageNet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). IEEE
- Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010)
- Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR
- Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748
- Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016)
- Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020)
- Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006)
- Fraley, C.: Algorithms for model-based Gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998)
- Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020)
- Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020)
- Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: RandAugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020)
IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 
702–703 (2020) Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. 
[2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. 
Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020)
- Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems 34, 12116–12128 (2021) Cohen et al. [2020] Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. 
[2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Cohen, U., Chung, S., Lee, D.D., Sompolinsky, H.: Separability and geometry of object manifolds in deep neural networks. Nature communications 11(1), 746 (2020) Somepalli et al. [2022] Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. 
[2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Somepalli, G., Fowl, L., Bansal, A., Yeh-Chiang, P., Dar, Y., Baraniuk, R., Goldblum, M., Goldstein, T.: Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13699–13708 (2022) Sharif Razavian et al. [2014] Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 
702–703 (2020) Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: Cnn features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014) Koh et al. [2021] Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR Mickisch et al. [2020] Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. 
In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020) Deng et al. [2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. 
Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). Ieee Boureau et al. [2010] Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010) Lee et al. [2016] Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 
464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. 
IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. 
In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020)
- Koh, P.W., Sagawa, S., Marklund, H., Xie, S.M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R.L., Gao, I., et al.: Wilds: A benchmark of in-the-wild distribution shifts. In: International Conference on Machine Learning, pp. 5637–5664 (2021). PMLR
- Mickisch, D., Assion, F., Greßner, F., Günther, W., Motta, M.: Understanding the decision boundary of deep neural networks: An empirical study. arXiv preprint arXiv:2002.01810 (2020)
- Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). IEEE
- Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 111–118 (2010)
- Lee, C.-Y., Gallagher, P.W., Tu, Z.: Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In: Artificial Intelligence and Statistics, pp. 464–472 (2016). PMLR Likitlersuang and Zariffa [2018] Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. 
Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. 
[2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020)
- Likitlersuang, J., Zariffa, J.: Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function. IEEE Journal of Biomedical and Health Informatics 22(2), 561–569 (2018) https://doi.org/10.1109/JBHI.2016.2636748 de Amorim [2016] Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. 
arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020)
- Amorim, R.C.: A survey on feature weighting based k-means algorithms. Journal of Classification 33(2), 210–242 (2016) Ahmed et al. [2020] Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. 
Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020)
- Ahmed, M., Seraj, R., Islam, S.M.S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 9(8), 1295 (2020) Arthur and Vassilvitskii [2006] Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. 
arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020)
- Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. Technical report, Stanford (2006) Fraley [1998] Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020)
- Fraley, C.: Algorithms for model-based gaussian hierarchical clustering. SIAM Journal on Scientific Computing 20(1), 270–281 (1998) Khosla et al. [2020] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020)
- Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661–18673 (2020) Tian et al. [2020] Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020)
- Tian, Y., Yu, L., Chen, X., Ganguli, S.: Understanding self-supervised learning with dual deep networks. arXiv preprint arXiv:2010.00578 (2020) Cubuk et al. [2020] Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020) Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020)
- Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703 (2020)