Adversarial loss #1
Thanks for sharing the source code! But I have some questions:

1. In adda/tools/train_adda.py, lines 88-103, it seems that you set the source domain label to 1 and the target domain label to 0? But here the source distribution is fixed while the target distribution is being updated. According to the original setting in the GAN paper, the changing distribution is the one that should get the zero label, and equation (9) in the ADDA paper also shows that the ground-truth label of the source should be 1.

2. How do you separate the encoder and the classifier in the base model? In the paper, the encoder seems to contain only the CNN, but in the code you appear to take the whole network as the encoder. Where, then, is the classifier?

Thanks for your time. Looking forward to your response.
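(Editor's note: the sketch below is not code from this repository. It is a minimal PyTorch sketch, with hypothetical module names and LeNet-style sizes for 28x28 inputs, of one common reading of the paper's setup: the encoder M is the convolutional trunk, the classifier C is the final linear layer, and the adversarial labels follow the convention asked about above, source = 1 and target = 0 for the discriminator, with the inverted label for the target-encoder update as in Eq. (9).)

```python
import copy
import torch
import torch.nn as nn

# Hypothetical modules, not from this repo: a LeNet-style conv trunk
# as the encoder M and a single linear layer as the classifier C, so
# the adversary operates on 500-d features rather than class logits.
def make_encoder():
    return nn.Sequential(
        nn.Conv2d(1, 20, 5), nn.MaxPool2d(2), nn.ReLU(),
        nn.Conv2d(20, 50, 5), nn.MaxPool2d(2), nn.ReLU(),
        nn.Flatten(), nn.Linear(50 * 4 * 4, 500), nn.ReLU(),
    )

src_encoder = make_encoder()              # pretrained on source, then frozen
tgt_encoder = copy.deepcopy(src_encoder)  # initialized from the source weights
classifier = nn.Linear(500, 10)           # shared, stays fixed during adaptation
discriminator = nn.Sequential(
    nn.Linear(500, 500), nn.ReLU(), nn.Linear(500, 1),
)
bce = nn.BCEWithLogitsLoss()

def discriminator_loss(src_feats, tgt_feats):
    # Source features labelled 1, target features labelled 0;
    # detach so this step only updates the discriminator.
    s = discriminator(src_feats.detach())
    t = discriminator(tgt_feats.detach())
    return bce(s, torch.ones_like(s)) + bce(t, torch.zeros_like(t))

def mapping_loss(tgt_feats):
    # Inverted label for the target-encoder update (Eq. (9)): target
    # features are pushed to look like "source" (label 1) to D.
    t = discriminator(tgt_feats)
    return bce(t, torch.ones_like(t))
```

During adaptation, src_encoder and classifier stay frozen; only discriminator and tgt_encoder are updated, alternating the two losses.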
Comments

Hi, thanks for sharing the source code, too! But I faced the same problem as @Yuliang-Zou described above. Thanks.

I am trying to implement the same setting (SVHN -> MNIST) in PyTorch and I cannot get the target model to converge, perhaps because of the same issue that @BoChengTsai and @Yuliang-Zou mentioned. Any suggestions?
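(Editor's note: in case it helps with convergence debugging, here is a sketch of the alternating update schedule, reusing the hypothetical modules from the sketch above; source_loader and target_loader are assumed DataLoaders over SVHN and MNIST, and the learning rates are placeholders, not the paper's settings. Common gotchas include driving both networks with one optimizer or letting gradients flow into the frozen source encoder.)

```python
# Alternating ADDA-style updates with separate optimizers.
# source_loader / target_loader: assumed DataLoaders (not defined here).
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
opt_m = torch.optim.Adam(tgt_encoder.parameters(), lr=1e-4)
src_encoder.eval()

for (xs, _), (xt, _) in zip(source_loader, target_loader):
    with torch.no_grad():
        src_feats = src_encoder(xs)   # source mapping stays frozen

    # 1) Discriminator step: source = 1, target = 0.
    opt_d.zero_grad()
    discriminator_loss(src_feats, tgt_encoder(xt)).backward()
    opt_d.step()

    # 2) Mapping step: target encoder trained with the inverted label.
    opt_m.zero_grad()
    mapping_loss(tgt_encoder(xt)).backward()
    opt_m.step()
```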
@Yuliang-Zou AFAIK, it does not make any difference; if you expand the loss expression, the two label conventions should be equivalent. It also took me some time to implement this in PyTorch; here is a checklist I wrote in the process, maybe it will be useful to some folks here.
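(Editor's note: to make the claimed equivalence explicit, here is my own expansion, using the M_s, M_t, D notation of the ADDA paper.)

```latex
% Source = 1, target = 0 (the ADDA discriminator loss):
\mathcal{L}_{\mathrm{adv}_D}(D) =
  -\,\mathbb{E}_{x_s}\!\left[\log D\!\left(M_s(x_s)\right)\right]
  -\,\mathbb{E}_{x_t}\!\left[\log\!\left(1 - D\!\left(M_t(x_t)\right)\right)\right]

% Swapped labels (source = 0, target = 1):
\mathcal{L}^{\mathrm{swap}}_{\mathrm{adv}_D}(D) =
  -\,\mathbb{E}_{x_s}\!\left[\log\!\left(1 - D\!\left(M_s(x_s)\right)\right)\right]
  -\,\mathbb{E}_{x_t}\!\left[\log D\!\left(M_t(x_t)\right)\right]
  = \mathcal{L}_{\mathrm{adv}_D}(1 - D)
```

Since 1 - \sigma(z) = \sigma(-z), replacing D by 1 - D only flips the sign of the discriminator logit, so the two conventions search the same family of discriminators and define the same optimization problem; the inverted-label mapping loss of Eq. (9), \mathcal{L}_{\mathrm{adv}_M} = -\,\mathbb{E}_{x_t}[\log D(M_t(x_t))], is likewise unchanged up to the same relabelling.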
@MInner could you share your PyTorch code for SVHN -> MNIST? I tried with these hyperparameters but am unable to replicate the paper's accuracies.