Norm subgradient at 0 by albanD · Pull Request #2775 · pytorch/pytorch · GitHub

Norm subgradient at 0 #2775

Merged: 3 commits merged into pytorch:master on Sep 18, 2017

Conversation

albanD (Collaborator) commented Sep 18, 2017

Fix #2421

Note that with this change, the behavior at 0 now differs between .norm() and .pow(2).sum().sqrt(): the first returns a gradient of 0, while the second returns NaN.

output = output.unsqueeze(ctx.dim)
# Special case at 0 where we return a subgradient containing 0
zero_mask = (output == 0).type_as(grad_input)
# zero out the gradient in place wherever the norm is exactly 0
grad_input.mul_(1 - zero_mask)
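
For illustration, a minimal sketch of the resulting behavior (assuming a current PyTorch build where requires_grad can be set directly on tensors; the PR itself predates that API and used Variable):

import torch

x = torch.zeros(3, requires_grad=True)

# With the subgradient convention from this PR, .norm() gives a zero gradient at 0.
x.norm().backward()
print(x.grad)   # tensor([0., 0., 0.])

x.grad = None

# Spelling the norm out by hand still propagates NaN: the derivative of sqrt(u)
# is 1 / (2 * sqrt(u)), which is infinite at u = 0, and 0 * inf gives NaN.
x.pow(2).sum().sqrt().backward()
print(x.grad)   # tensor([nan, nan, nan])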


soumith merged commit 2763bfc into pytorch:master on Sep 18, 2017

soumith (Member) commented Sep 18, 2017

thanks Alban!

Development

Successfully merging this pull request may close these issues.

Gradient of zero norm is nan
4 participants