The code for the following paper will be released in this repository:

**Membership Inference on Text-to-image Diffusion Models via Conditional Likelihood Discrepancy** (NeurIPS 2024).
## Fine-tuning Target and Shadow Models

- Use the `ft_mia.sh` script to fine-tune the target and shadow models on two different training sets (COCO_MIA_Finetuning Data).
## Performing Membership Inference

- Utilize the following scripts to conduct membership inference on the fine-tuned models:
  - `mia_Loss.py`
  - `mia_pfami.py`
  - `mia_SEC_PIA.py`
  - `mia_CLiD_impt.py` (or `mia_CLiD_clip.py`): CLiD with the reduction methods of importance clipping and simple clipping, respectively
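The core statistic behind the CLiD scripts, as the paper title suggests, is the conditional likelihood discrepancy: the gap between the model's unconditional denoising loss and its text-conditional loss on a candidate sample. A minimal NumPy sketch of this scoring logic, with illustrative loss values only (the real scripts obtain per-timestep losses from the fine-tuned diffusion model):

```python
import numpy as np

def clid_score(cond_losses, uncond_losses):
    # Conditional likelihood discrepancy: mean unconditional denoising
    # loss minus mean text-conditional loss, averaged over sampled
    # timesteps. A larger gap suggests the (image, caption) pair was
    # part of the fine-tuning set.
    return float(np.mean(uncond_losses) - np.mean(cond_losses))

# Toy per-timestep losses (illustrative numbers only): a member sample
# typically shows a lower conditional loss relative to the unconditional one.
member_score = clid_score(cond_losses=[0.32, 0.35, 0.30],
                          uncond_losses=[0.50, 0.52, 0.48])
nonmember_score = clid_score(cond_losses=[0.47, 0.49, 0.46],
                             uncond_losses=[0.50, 0.52, 0.48])
print(member_score > nonmember_score)  # → True
```

Membership is then decided by thresholding this score (CLiD-th) or by feeding per-timestep discrepancies to a classifier (CLiD-vec).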
## Evaluation

- Use the `cal_xx.py` scripts to compute the metrics of the following methods:
  - Baselines: `cal_baselines.py`
  - CLiD-th: `cal_clid_th.py`
  - CLiD-vec: `cal_clid_xgb.py`
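Membership-inference evaluation is conventionally reported as ROC AUC and TPR at a low fixed FPR; assuming the `cal_*.py` scripts follow this convention, the metrics can be sketched in plain NumPy (scores and numbers below are illustrative only):

```python
import numpy as np

def auc(member_scores, nonmember_scores):
    # Rank-based AUC: probability that a randomly chosen member scores
    # higher than a randomly chosen non-member (ties count as 0.5).
    m = np.asarray(member_scores)[:, None]
    n = np.asarray(nonmember_scores)[None, :]
    return float(np.mean((m > n) + 0.5 * (m == n)))

def tpr_at_fpr(member_scores, nonmember_scores, target_fpr=0.01):
    # TPR at a fixed low FPR: place the threshold at the (1 - FPR)
    # quantile of non-member scores, then measure member recall.
    thresh = np.quantile(nonmember_scores, 1.0 - target_fpr)
    return float(np.mean(np.asarray(member_scores) > thresh))

members = [0.9, 0.8, 0.75, 0.6]      # illustrative membership scores
nonmembers = [0.7, 0.4, 0.3, 0.2]
print(auc(members, nonmembers))      # → 0.9375
```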
## Validation Results on the MS-COCO Dataset under Real-World Training Settings

- We additionally provide the intermediate results on the MS-COCO dataset in Sec. 4.2 for validation.
- These results are obtained under real-world training settings.
- To be added.
## Citation

If you find this project useful in your research, please consider citing our paper:
```bibtex
@article{zhai2024membership,
  title={Membership Inference on Text-to-Image Diffusion Models via Conditional Likelihood Discrepancy},
  author={Zhai, Shengfang and Chen, Huanran and Dong, Yinpeng and Li, Jiajun and Shen, Qingni and Gao, Yansong and Su, Hang and Liu, Yang},
  journal={arXiv preprint arXiv:2405.14800},
  year={2024}
}
```