[ACM MM2024] Mitigate Catastrophic Remembering via Continual Knowledge Purification for Noisy Lifelong Person Re-Identification (CKP)
```bash
conda create -n IRL python=3.7
conda activate IRL
pip install torch==1.13.1+cu117 torchvision==0.14.1+cu117 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cu117
pip install -r requirement.txt
```
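After installation, a quick sanity check (a minimal snippet of ours, not part of the repository) can confirm that the pinned PyTorch build is installed and sees the GPU:

```python
# Sanity check (not part of the repository): confirm the pinned
# PyTorch/torchvision builds are installed and CUDA is visible.
import torch
import torchvision

print(torch.__version__)          # expected: 1.13.1+cu117
print(torchvision.__version__)    # expected: 0.14.1+cu117
print(torch.cuda.is_available())  # expected: True on a CUDA-capable machine
```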
Download the person re-identification datasets Market-1501, MSMT17, CUHK03, and SenseReID. The remaining datasets can be prepared following Torchreid_Datasets_Doc and light-reid. Then unzip and rename them so that the directory is structured as follows:
```
PRID
├── CUHK01
│   └── ..
├── CUHK02
│   └── ..
├── CUHK03
│   └── ..
├── CUHK-SYSU
│   └── ..
├── DukeMTMC-reID
│   └── ..
├── grid
│   └── ..
├── i-LIDS_Pedestrain
│   └── ..
├── MSMT17_V2
│   └── ..
├── Market-1501
│   └── ..
├── prid2011
│   └── ..
├── SenseReID
│   └── ..
└── viper
    └── ..
```
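Before training, it may help to verify the layout. The helper below is a hypothetical snippet of ours (not shipped with the repository) that checks the dataset root for the folders listed above:

```python
# Hypothetical layout check (ours, not part of the repository):
# verify that every expected dataset folder exists under the PRID root.
import os

EXPECTED = [
    "CUHK01", "CUHK02", "CUHK03", "CUHK-SYSU", "DukeMTMC-reID",
    "grid", "i-LIDS_Pedestrain", "MSMT17_V2", "Market-1501",
    "prid2011", "SenseReID", "viper",
]

def check_layout(root):
    missing = [d for d in EXPECTED if not os.path.isdir(os.path.join(root, d))]
    if missing:
        print("Missing dataset folders:", ", ".join(missing))
    else:
        print("All expected dataset folders found under", root)

check_layout("../DATA/PRID")  # adjust to your --data-dir
```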
This repository includes noisy label configurations in the `noisy_data/` folder. Along with the noise ratios of 0.1, 0.2, and 0.3 used in the paper, we also provide the additional ratios 0.4, 0.5, 0.6, 0.7, and 0.8 to support further investigation by the Noisy LReID community. Note that the noisy labels are generated following PNet.
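For intuition, the sketch below shows how uniform random label noise at a given ratio could be produced. It is only an illustration under our own assumptions: the released `noisy_data/` files follow PNet's generation procedure, and the `pattern` setting flips labels non-uniformly rather than at random.

```python
# Illustrative sketch only (ours): uniform random label noise at a given
# ratio. The released noisy_data/ files were generated following PNet,
# and the "pattern" setting flips labels non-uniformly instead.
import random

def add_random_noise(labels, num_ids, noise_ratio, seed=0):
    rng = random.Random(seed)
    noisy = list(labels)
    # Pick noise_ratio * N samples to corrupt
    flip = rng.sample(range(len(noisy)), int(noise_ratio * len(noisy)))
    for i in flip:
        # Reassign each picked sample to a different, random identity
        wrong = rng.randrange(num_ids - 1)
        noisy[i] = wrong if wrong < noisy[i] else wrong + 1
    return noisy

print(add_random_noise([0, 1, 2, 3, 4, 0, 1, 2, 3, 4], num_ids=5, noise_ratio=0.3))
```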
Training on a single dataset (`--noise random/pattern/clean`, `--noise_ratio 0.1/0.2/0.3/0.4/...`):
`python continual_train_noisy.py --data-dir path/to/PRID --noise_ratio XX --noise YY`
(for example, `python continual_train_noisy.py --data-dir ../DATA/PRID --noise_ratio 0.3 --noise pattern`)
Reproduce the random/pattern noise results by running the corresponding bash file:
`sh release_random.sh`
`sh release_pattern.sh`
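These scripts presumably sweep the noise ratios for each setting. A minimal Python equivalent for the `random` setting, assuming only the `continual_train_noisy.py` flags documented above, might look like this:

```python
# Hypothetical sweep (our sketch, not part of the repository): launch
# one training run per noise ratio used in the paper, relying only on
# the continual_train_noisy.py flags documented above.
import subprocess

for ratio in (0.1, 0.2, 0.3):
    subprocess.run(
        [
            "python", "continual_train_noisy.py",
            "--data-dir", "../DATA/PRID",
            "--noise", "random",
            "--noise_ratio", str(ratio),
        ],
        check=True,  # stop the sweep if a run fails
    )
```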
Evaluation from a checkpoint:
`python continual_train_noisy.py --data-dir path/to/PRID --test_folder /path/to/pretrained/folder`
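(for example, `python continual_train_noisy.py --data-dir ../DATA/PRID --test_folder ../logs/pattern_0.3/`, where `../logs/pattern_0.3/` is a hypothetical checkpoint folder; substitute the folder your training run actually produced)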
The reported results were obtained with an NVIDIA RTX 4090 GPU.
If you find this code useful for your research, please cite our paper:

```bibtex
@inproceedings{xu2024mitigate,
  title={Mitigate Catastrophic Remembering via Continual Knowledge Purification for Noisy Lifelong Person Re-Identification},
  author={Xu, Kunlun and Zhang, Haozhuo and Li, Yu and Peng, Yuxin and Zhou, Jiahuan},
  booktitle={ACM Multimedia 2024},
  year={2024}
}
```
Our LReID code is based on LSTKC and CORE.
For any questions, feel free to contact us (xkl@stu.pku.edu.cn).
You are also welcome to visit our laboratory homepage (OV3 Lab) for more information about our papers, source code, and datasets.