This repository contains the PyTorch implementation of the T-PAMI 2024 paper "Probabilistic Contrastive Learning for Long-Tailed Visual Recognition".
Probabilistic Contrastive Learning for Long-Tailed Visual Recognition
Chaoqun Du, Yulin Wang, Shiji Song, Gao Huang
We propose a novel probabilistic contrastive (ProCo) learning algorithm for long-tailed distributions. Specifically, we employ a reasonable and straightforward von Mises-Fisher (vMF) distribution to model the normalized feature space of samples in the context of contrastive learning. This choice offers two key advantages. First, the distribution parameters can be estimated efficiently across different batches by maximum likelihood estimation. Second, we derive a closed form of the expected supervised contrastive loss for optimization by sampling an infinite number of samples from the estimated distribution. This eliminates the inherent limitation of supervised contrastive learning, which requires a large number of samples to achieve satisfactory performance.
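As a rough illustration of the first ingredient, the sketch below estimates vMF parameters (mean direction and concentration) from a batch of L2-normalized features using the standard Banerjee et al. approximation for the concentration. This is a simplified, hypothetical sketch: the function name `vmf_mle` is ours, and the actual implementation maintains per-class estimates online across batches rather than from a single batch.

```python
import torch
import torch.nn.functional as F

def vmf_mle(features: torch.Tensor):
    """Approximate MLE of a von Mises-Fisher distribution from
    L2-normalized features of one class (illustrative sketch).
    features: (N, d) tensor of unit-norm vectors.
    """
    n, d = features.shape
    s = features.sum(dim=0)                # resultant vector
    r_bar = s.norm() / n                   # mean resultant length in [0, 1)
    mu = F.normalize(s, dim=0)             # mean direction
    # Banerjee et al. approximation for the concentration parameter
    kappa = r_bar * (d - r_bar ** 2) / (1.0 - r_bar ** 2)
    return mu, kappa

# toy usage: features clustered around one direction yield a large kappa
torch.manual_seed(0)
center = F.normalize(torch.randn(128), dim=0)
feats = F.normalize(center + 0.1 * torch.randn(64, 128), dim=1)
mu, kappa = vmf_mle(feats)
```

The estimated mean direction recovers the underlying cluster center, and tighter clusters produce larger concentration values.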
Method | Dataset | Imbalance Factor | Epochs | Top-1 Acc.(%) | Model |
---|---|---|---|---|---|
ProCo | CIFAR100-LT | 100 | 200 | 52.8 | Tsinghua Cloud/Google Drive |
ProCo | CIFAR100-LT | 100 | 400 | 54.2 | Tsinghua Cloud/Google Drive |
ProCo | CIFAR100-LT | 50 | 200 | 57.1 | Tsinghua Cloud/Google Drive |
ProCo | CIFAR100-LT | 10 | 200 | 65.5 | Tsinghua Cloud/Google Drive |
ProCo | CIFAR10-LT | 100 | 200 | 85.9 | Tsinghua Cloud/Google Drive |
ProCo | CIFAR10-LT | 50 | 200 | 88.2 | Tsinghua Cloud/Google Drive |
ProCo | CIFAR10-LT | 10 | 200 | 91.9 | Tsinghua Cloud/Google Drive |
We also provide the TensorBoard logs for the CIFAR experiments in the `logs` folder.
Method | Backbone | Dataset | Epochs | Top-1 Acc.(%) | Model |
---|---|---|---|---|---|
ProCo | ResNet-50 | ImageNet-LT | 90 | 57.3 | Tsinghua Cloud/Google Drive |
ProCo | ResNeXt-50 | ImageNet-LT | 90 | 58.0 | Tsinghua Cloud/Google Drive |
ProCo | ResNet-50 | iNaturalist 2018 | 90 | 73.5 | Tsinghua Cloud/Google Drive |
ProCo | ResNet-50 | ImageNet-LT | 180 | 57.8 | Tsinghua Cloud/Google Drive |
- python 3.9
- numpy 1.23.3
- Pillow 8.2.0
- Requests 2.25.1
- scipy 1.9.3
- tensorboardX 2.5.1
- torch 1.12.1
- torchvision 0.13.1
The environment above is recommended but not required; other package versions may also work.
By default, we use 1 RTX 3090 GPU for CIFAR training, 4 RTX 3090 GPUs for ImageNet-LT training, and 8 A100 (40GB) GPUs for iNaturalist 2018 training. You can adjust the batch size according to your GPU memory.
bash sh/ProCo_CIFAR.sh ${dataset} ${imbalance_factor} ${epochs}
bash sh/ProCo_CIFAR.sh cifar100 0.01 200
bash sh/ProCo_CIFAR.sh cifar100 0.01 400
bash sh/ProCo_CIFAR.sh cifar100 0.02 200
bash sh/ProCo_CIFAR.sh cifar100 0.1 200
bash sh/ProCo_CIFAR.sh cifar10 0.01 200
bash sh/ProCo_CIFAR.sh cifar10 0.02 200
bash sh/ProCo_CIFAR.sh cifar10 0.1 200
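Note that the `${imbalance_factor}` argument is the ratio of the rarest to the most frequent class (e.g. `0.01` corresponds to imbalance factor 100 in the tables above). The sketch below shows the standard exponential long-tailed profile commonly used to build CIFAR-LT (as in Cui et al., "Class-Balanced Loss"); it is illustrative only, and the repo's own code performs the actual dataset construction.

```python
def long_tailed_counts(n_max: int, num_classes: int, imb: float):
    """Per-class sample counts decaying exponentially, so the rarest
    class keeps roughly n_max * imb samples (imb = 1 / imbalance factor)."""
    return [int(n_max * imb ** (k / (num_classes - 1)))
            for k in range(num_classes)]

# CIFAR100-LT with imbalance factor 100: head class 500, tail class 5
counts = long_tailed_counts(n_max=500, num_classes=100, imb=0.01)
```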
bash sh/ProCo_ImageNetLT_R50_90epochs.sh
bash sh/ProCo_ImageNetLT_R50_180epochs.sh
bash sh/ProCo_ImageNetLT_X50_90epochs.sh
bash sh/ProCo_inat_R50_90epochs.sh
For evaluation, run one of the following commands:
bash sh/ProCo_CIFAR.sh cifar100 0.01 200 ${checkpoint_path}
bash sh/ProCo_ImageNetLT_R50_90epochs.sh ${checkpoint_path}
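The scripts above handle evaluation when given `${checkpoint_path}`. For reference, top-1 accuracy (the metric reported in the tables) amounts to the following; this is a generic sketch, not the repo's actual evaluation entry point, and the checkpoint-loading line in the comment assumes a conventional `state_dict` layout.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def top1_accuracy(model, loader, device="cpu"):
    """Top-1 accuracy over (images, targets) batches (illustrative)."""
    model.eval()
    correct = total = 0
    for images, targets in loader:
        logits = model(images.to(device))
        correct += (logits.argmax(dim=1).cpu() == targets).sum().item()
        total += targets.numel()
    return 100.0 * correct / total

# toy usage with a random linear "model"; a real run would typically
# first load weights, e.g. model.load_state_dict(torch.load(path)["state_dict"])
torch.manual_seed(0)
model = nn.Linear(8, 3)
loader = [(torch.randn(16, 8), torch.randint(0, 3, (16,)))]
acc = top1_accuracy(model, loader)
```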
- Long-tailed Semi-Supervised Learning.
If you find this code useful, please consider citing our paper:
@article{du2024probabilistic,
title={Probabilistic Contrastive Learning for Long-Tailed Visual Recognition},
author={Du, Chaoqun and Wang, Yulin and Song, Shiji and Huang, Gao},
journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
year={2024},
publisher={IEEE}
}
If you have any questions, please feel free to contact the authors: Chaoqun Du ([email protected]).
Our code is based on the BCL (Balanced Contrastive Learning for Long-Tailed Visual Recognition) repository.