This repository provides a modified version of the PyTorch implementation of StarGAN, prepared to make it easy to run the network on new datasets. The original repository can be found here.
To run the original experiment with the CelebA dataset:

NOTE: the default parameters in main.py reproduce the CelebA experiment from the original repository, except for --crop_size and --image_resize, which default to None.
```bash
python main.py --exp_name celeba --image_dir $PATH_TO_CELEBA --crop_size 178 --image_resize 128
```
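For reference, assuming these flags map onto the usual torchvision preprocessing for CelebA (an assumption about this fork's internals, not something stated in the repository), the two values correspond roughly to:

```python
from torchvision import transforms as T

# Assumed equivalent of --crop_size 178 --image_resize 128:
# center-crop the 178x218 CelebA faces to 178x178, then resize to 128x128.
celeba_transform = T.Compose([
    T.CenterCrop(178),
    T.Resize(128),
    T.ToTensor(),
    T.Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5)),
])
```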
Check celeba.py to see how to perform a translation over a single dataset with different attributes.
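As a rough sketch of what a single-dataset, multi-attribute loader has to provide (the class name and file format below are illustrative assumptions, not the actual code in celeba.py), each sample is an image paired with its binary attribute vector:

```python
import os

import torch
from PIL import Image
from torch.utils.data import Dataset


class SingleDatasetWithAttributes(Dataset):
    """Hypothetical sketch: one dataset whose images each carry a binary
    attribute vector (e.g. a subset of the CelebA attributes)."""

    def __init__(self, image_dir, attr_file, selected_attrs, transform):
        self.image_dir = image_dir
        self.transform = transform
        self.samples = []  # list of (filename, attribute label) pairs
        with open(attr_file) as f:
            lines = f.read().splitlines()
        attr_names = lines[1].split()          # header row with attribute names
        idxs = [attr_names.index(a) for a in selected_attrs]
        for line in lines[2:]:                 # one "filename 1 -1 ..." row per image
            parts = line.split()
            fname, values = parts[0], parts[1:]
            self.samples.append((fname, [float(values[i] == '1') for i in idxs]))

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, index):
        fname, label = self.samples[index]
        image = Image.open(os.path.join(self.image_dir, fname)).convert('RGB')
        return self.transform(image), torch.tensor(label)
```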
To run a translation between the MNIST and MNIST-M datasets:
```bash
python main.py --dataset mnist2mnistm --exp_name mnist2mnistm --image_dir $PATH_CONTAINING_BOTH_DATASETS --c_dim 2 --d_conv_dim 32 --g_conv_dim 32 --g_repeat_num 0 --d_repeat_num 4
```
Check mnist2mnistm.py to see how to perform a translation between two different datasets.
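Conceptually, the cross-dataset case treats the two datasets as two domains and feeds the generator a one-hot domain label, which is what --c_dim 2 stands for. A minimal sketch of that idea (hypothetical class, not the actual contents of mnist2mnistm.py):

```python
import torch
from torch.utils.data import Dataset


class TwoDomainDataset(Dataset):
    """Hypothetical sketch: concatenate two datasets (e.g. MNIST and MNIST-M)
    and attach a one-hot domain label, playing the role of c_dim=2."""

    def __init__(self, dataset_a, dataset_b):
        self.dataset_a = dataset_a          # e.g. torchvision MNIST
        self.dataset_b = dataset_b          # e.g. an MNIST-M ImageFolder
        self.len_a = len(dataset_a)

    def __len__(self):
        return self.len_a + len(self.dataset_b)

    def __getitem__(self, index):
        if index < self.len_a:
            image, _ = self.dataset_a[index]
            domain = torch.tensor([1.0, 0.0])   # domain 0: MNIST
        else:
            image, _ = self.dataset_b[index - self.len_a]
            domain = torch.tensor([0.0, 1.0])   # domain 1: MNIST-M
        return image, domain
```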
StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation
Yunjey Choi 1,2, Minje Choi 1,2, Munyoung Kim 2,3, Jung-Woo Ha 2, Sunghun Kim 2,4, and Jaegul Choo 1,2
1 Korea University, 2 Clova AI Research (NAVER Corp.), 3 The College of New Jersey, 4 HKUST
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018 (Oral)
- Python 3.5+
- PyTorch 0.4.0
- TensorFlow 1.3+ (optional, for TensorBoard)
If you find this work useful for your research, please cite the paper:
```
@InProceedings{StarGAN2018,
  author    = {Choi, Yunjey and Choi, Minje and Kim, Munyoung and Ha, Jung-Woo and Kim, Sunghun and Choo, Jaegul},
  title     = {StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2018}
}
```