There seems to be a memory leak in the improved WGAN code. It is caused by the interaction between the `BatchNorm2d` layers in the Discriminator and Generator of the DCGAN (which is used for the improved WGAN) and the `torch.autograd.grad` function. This is also referenced in this issue: PyTorch Double Backward on BatchNorm2d.
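For reference, a minimal sketch of the pattern that triggers the double backward (the critic architecture here is hypothetical, not the repo's actual model; the `torch.autograd.grad(..., create_graph=True)` call is the piece that has to differentiate through `BatchNorm2d` twice):

```python
import torch
import torch.nn as nn

# Hypothetical DCGAN-style critic; the BatchNorm2d layer is what the
# second backward pass has to differentiate through.
critic = nn.Sequential(
    nn.Conv2d(3, 64, 4, stride=2, padding=1),
    nn.BatchNorm2d(64),
    nn.LeakyReLU(0.2),
    nn.Conv2d(64, 1, 4),
)

def gradient_penalty(critic, real, fake):
    # Interpolate between real and fake samples.
    eps = torch.rand(real.size(0), 1, 1, 1)
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    out = critic(x_hat)
    # create_graph=True keeps the graph so the penalty itself can be
    # backpropagated; this double backward through BatchNorm2d is
    # where the leak shows up.
    grads, = torch.autograd.grad(
        outputs=out, inputs=x_hat,
        grad_outputs=torch.ones_like(out),
        create_graph=True, retain_graph=True,
    )
    return ((grads.view(grads.size(0), -1).norm(2, dim=1) - 1) ** 2).mean()

real = torch.randn(8, 3, 32, 32)
fake = torch.randn(8, 3, 32, 32)
gp = gradient_penalty(critic, real, fake)
gp.backward()  # second backward, through the BatchNorm graph
```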
The patch is ready, but we might have to wait until the next release.
There also seems to be a problem with training without the `BatchNorm2d` layers. This needs to be tracked and fixed; a possible interim substitution is sketched below.
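One avenue worth noting: the WGAN-GP paper itself recommends avoiding batch normalization in the critic (it suggests layer normalization instead, since the gradient penalty is defined per sample). An untested sketch of such a substitution, assuming a DCGAN-style critic like the hypothetical one above:

```python
import torch.nn as nn

# Sketch only: replace BatchNorm2d with a per-sample normalization
# rather than removing normalization entirely. InstanceNorm2d avoids
# cross-batch statistics, which also sidesteps the double-backward leak.
critic_no_bn = nn.Sequential(
    nn.Conv2d(3, 64, 4, stride=2, padding=1),
    nn.InstanceNorm2d(64, affine=True),  # normalizes each sample independently
    nn.LeakyReLU(0.2),
    nn.Conv2d(64, 1, 4),
)
```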