Decoupled Contrastive Learning for Long-Tailed Recognition (2403.06151v1)
Abstract: Supervised Contrastive Loss (SCL) is popular in visual representation learning. Given an anchor image, SCL pulls together two types of positive samples, i.e., its own augmentation and other images from the same class, while pushing negative images apart to optimize the learned embedding. In long-tailed recognition, where the number of samples per class is imbalanced, treating the two types of positives equally leads to biased optimization of the intra-category distance. In addition, similarity relationships among negative samples, which SCL ignores, also carry meaningful semantic cues. To improve performance on long-tailed recognition, this paper addresses these two issues of SCL by decoupling the training objective. Specifically, it decouples the two types of positives in SCL and optimizes their relations toward different objectives to alleviate the influence of the imbalanced dataset. We further propose patch-based self-distillation to transfer knowledge from head to tail classes and thus relieve the under-representation of tail classes. It uses patch-based features to mine shared visual patterns among different instances and leverages a self-distillation procedure to transfer such knowledge. Experiments on different long-tailed classification benchmarks demonstrate the superiority of our method. For instance, it achieves 57.7% top-1 accuracy on the ImageNet-LT dataset. Combined with an ensemble-based method, the performance can be further boosted to 59.7%, which substantially outperforms many recent works. The code is available at https://github.com/SY-Xuan/DSCL.
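The decoupling idea is straightforward to prototype. Below is a minimal PyTorch sketch (not the authors' released code; see the GitHub link above for that) of a supervised contrastive loss in which the anchor's augmentation and its same-class positives receive separate weight budgets. The fixed `alpha` split is an assumption of this sketch; the paper's exact weighting scheme may differ.

```python
import torch
import torch.nn.functional as F

def decoupled_supcon_loss(feats, aug_feats, labels, tau=0.1, alpha=0.5):
    """Decoupled supervised contrastive loss (illustrative sketch).

    feats, aug_feats: L2-normalized embeddings of a batch and its
    augmented views, each of shape (B, D). labels: (B,) class ids.
    alpha is an assumed fixed weight balancing the augmentation
    positive against same-class positives.
    """
    B = feats.size(0)
    z = torch.cat([feats, aug_feats], dim=0)               # (2B, D)
    sim = z @ z.t() / tau                                  # pairwise logits
    self_mask = torch.eye(2 * B, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float('-inf'))        # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    log_prob = log_prob.masked_fill(self_mask, 0.0)        # avoid 0 * -inf below

    y = labels.repeat(2)                                   # labels for both views
    same_class = (y.unsqueeze(0) == y.unsqueeze(1)) & ~self_mask
    idx = torch.arange(2 * B, device=z.device)
    aug_pos = torch.zeros_like(same_class)
    aug_pos[idx, (idx + B) % (2 * B)] = True               # view i <-> view i+B
    class_pos = same_class & ~aug_pos                      # same class, other images

    # Decoupled weighting: alpha on the augmentation positive,
    # (1 - alpha) shared evenly among same-class positives, so the
    # per-anchor weight split is independent of class frequency.
    n_cls = class_pos.sum(dim=1).clamp(min=1).float().unsqueeze(1)
    w = alpha * aug_pos.float() + (1 - alpha) * class_pos.float() / n_cls
    return -(w * log_prob).sum(dim=1).mean()
```

Because the split between the two positive types is fixed at `alpha` rather than scaling with the number of same-class positives, tail-class anchors (few same-class positives) and head-class anchors (many) are optimized toward the same intra-category objective. The patch-based self-distillation can be sketched along similar lines: patch features score shared visual patterns against other instances, and the resulting similarity distribution supervises the instance-level one. The memory bank, max-pooling over patches, and KL objective here are illustrative assumptions rather than the paper's exact procedure.

```python
def patch_self_distillation(inst_feats, patch_feats, bank_feats, tau=0.1):
    """Patch-based self-distillation (illustrative sketch).

    inst_feats: (B, D) instance embeddings; patch_feats: (B, P, D)
    patch embeddings; bank_feats: (K, D) embeddings of other
    instances (e.g., a memory bank). All assumed L2-normalized.
    """
    # Teacher: each anchor's best-matching patch similarity to every
    # bank instance captures shared local visual patterns.
    patch_sim = torch.einsum('bpd,kd->bpk', patch_feats, bank_feats)
    teacher = F.softmax(patch_sim.max(dim=1).values / tau, dim=1).detach()
    # Student: instance-level similarity distribution over the bank.
    student = F.log_softmax(inst_feats @ bank_feats.t() / tau, dim=1)
    # KL divergence transfers patch-level structure to instance level.
    return F.kl_div(student, teacher, reduction='batchmean')
```

In training, one would normalize encoder outputs (e.g., `F.normalize(..., dim=1)`) and sum the two terms with a tunable coefficient.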