This is an unofficial PyTorch implementation of DAR-BN for the paper *Long-tail learning via logit adjustment* (ICLR 2021) by Aditya Krishna Menon, Sadeep Jayasumana, Ankit Singh Rawat, Himanshu Jain, Andreas Veit, and Sanjiv Kumar.
The code is built with the following libraries:
- PyTorch 1.2
- TensorboardX
- scikit-learn
- Imbalanced CIFAR. The original data will be downloaded and converted by `imbalance_cifar.py` (a sketch of the construction idea follows below).
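
For reference, here is a minimal sketch of how an exponentially imbalanced ("long-tailed") CIFAR-10 subset can be built. The actual logic lives in `imbalance_cifar.py`; the helper below is a hypothetical illustration under the common convention that class sizes decay exponentially from head to tail, not the repo's exact code:

```python
# Hypothetical sketch; the repo's real logic is in imbalance_cifar.py.
import numpy as np
import torchvision

def make_exp_imbalanced_indices(targets, num_classes=10, imb_factor=0.01):
    """Return sample indices giving an exponential class-size profile.

    With imb_factor=0.01, the rarest class keeps 1% of the samples of the
    most frequent class, i.e. an imbalance ratio of 100.
    """
    targets = np.asarray(targets)
    n_max = len(targets) // num_classes  # 5000 per class for CIFAR-10
    indices = []
    for cls in range(num_classes):
        # Exponentially decaying per-class sample count.
        n_cls = int(n_max * imb_factor ** (cls / (num_classes - 1)))
        cls_indices = np.where(targets == cls)[0]
        indices.extend(cls_indices[:n_cls].tolist())
    return indices

train_set = torchvision.datasets.CIFAR10(root="./data", train=True, download=True)
keep = make_exp_imbalanced_indices(train_set.targets, imb_factor=0.01)
# e.g. wrap with torch.utils.data.Subset(train_set, keep) for training
```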
We provide several training examples with this repo:
- To train the ERM baseline on long-tailed imbalance with a ratio of 100:

```
python cifar_train.py --gpu 0 --imb_type exp --imb_factor 0.01 --loss_type CE --train_rule None
```

- To train the ERM (CE) loss along with DAR-BN on long-tailed imbalance with a ratio of 100:

```
python cifar_train.py --gpu 0 --imb_type exp --imb_factor 0.01 --loss_type CE --train_rule DAR-BN
```
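
The results below compare the CE baseline against the logit adjustment loss from the paper, which is softmax cross-entropy computed on logits shifted by `tau * log(class prior)`. A minimal sketch of that loss is given here; the function name, the example count vector, and `tau=1.0` are illustrative assumptions, not the repo's own code:

```python
# Sketch of the logit-adjusted cross-entropy from the paper;
# names and values here are illustrative, not the repo's code.
import torch
import torch.nn.functional as F

def logit_adjusted_ce(logits, labels, class_counts, tau=1.0):
    """Cross-entropy on logits shifted by tau * log(class prior)."""
    priors = class_counts / class_counts.sum()          # empirical class priors
    adjusted = logits + tau * torch.log(priors).unsqueeze(0)
    return F.cross_entropy(adjusted, labels)

# Example: batch of 4 over 10 classes with an exponentially imbalanced
# count vector (ratio ~100, matching --imb_factor 0.01).
counts = torch.tensor([5000., 2997., 1796., 1077., 645., 387., 232., 139., 83., 50.])
loss = logit_adjusted_ce(torch.randn(4, 10), torch.tensor([0, 3, 7, 9]), counts)
```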
Results (accuracy, %):

| Dataset | Baseline (ERM) | Logit adjustment loss |
| --- | --- | --- |
| CIFAR-10 LT | 79.6 | 80.74 |
| CIFAR-100 LT | 36.2 | 41.58 |