Improved WGAN memory leak and training issue · Issue #4 · DL-IT/generative_zoo

Open
vishwakftw opened this issue Aug 27, 2017 · 1 comment

vishwakftw (Member) commented Aug 27, 2017

There seems to be a memory leak in the improved WGAN code. It appears to be caused by the interaction between the BatchNorm2d layers in the Discriminator and Generator of the DCGAN (which the improved WGAN reuses) and the torch.autograd.grad function. This is also referenced in this issue: PyTorch Double Backward on BatchNorm2d.
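
For context, here is a minimal sketch of the gradient-penalty term where torch.autograd.grad is called with create_graph=True, which is the point at which the double backward passes through the critic's BatchNorm2d layers. The names critic, real, fake and lambda_gp are illustrative placeholders, not the repository's actual API:

```python
import torch

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    # Sketch of the WGAN-GP penalty; `critic`, `real`, `fake` are placeholders.
    batch_size = real.size(0)

    # Random interpolation between real and fake samples
    eps = torch.rand(batch_size, 1, 1, 1, device=real.device)
    interpolated = (eps * real + (1 - eps) * fake).requires_grad_(True)

    scores = critic(interpolated)

    # create_graph=True makes this differentiable, so backprop of the penalty
    # performs a double backward through the critic (including BatchNorm2d),
    # which is where the leak described above shows up.
    grads = torch.autograd.grad(
        outputs=scores,
        inputs=interpolated,
        grad_outputs=torch.ones_like(scores),
        create_graph=True,
        retain_graph=True,
    )[0]

    grads = grads.view(batch_size, -1)
    return lambda_gp * ((grads.norm(2, dim=1) - 1) ** 2).mean()
```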

The patch is ready, but we might have to wait until the next release.

There also seems to be a problem with training without the BatchNorm2d layers. This needs to be tracked and fixed.

vishwakftw added the bug label Aug 27, 2017
vishwakftw self-assigned this Aug 27, 2017
vishwakftw (Member, Author) commented

The problem now is that the training is extremely unstable: the D-loss and G-loss keep decreasing and don't seem to flatten out/saturate.
