Tags: gloryVine/pytorch
Perform appropriate CUDA stream synchronization in distributed autograd. (pytorch#53929) (pytorch#54358)

Summary: Pull Request resolved: pytorch#53929

The local autograd engine performs appropriate stream synchronization between autograd nodes in the graph to ensure that a consumer's stream is synchronized with the producer's stream before the consumer executes. However, in the case of distributed autograd, the SendRpcBackward function receives gradients over the wire, and TensorPipe uses its own pool of streams for this purpose. As a result, tensors are received on TensorPipe's stream pool, but SendRpcBackward runs on a different stream during the backward pass, and there is no logic to synchronize these streams. To fix this, I've enhanced DistEngine to synchronize these streams appropriately when it receives grads over the wire.

ghstack-source-id: 124055277
(Note: this ignores all push blocking failures!)

Test Plan:
1) Added a unit test that reproduced the issue.
2) waitforbuildbot.

Reviewed By: walterddr, wanchaol
Differential Revision: D27025307
fbshipit-source-id: 2944854e688e001cb3989d2741727b30d9278414
Co-authored-by: Pritam Damania <pritam.damania@fb.com>
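For background, the pattern this fix applies is standard CUDA producer/consumer stream synchronization: a stream must wait on the stream that produced a tensor before reading it. The sketch below illustrates the pattern with public PyTorch stream APIs; it is a minimal illustration under stated assumptions, not the actual DistEngine code, and the variable names (producer_stream, grad) are made up for the example.

import torch

assert torch.cuda.is_available()

# Stands in for TensorPipe's internal stream pool that receives the gradient.
producer_stream = torch.cuda.Stream()
with torch.cuda.stream(producer_stream):
    grad = torch.randn(1024, device="cuda")  # "received" on the producer stream

# The backward pass runs on a different stream; without this wait it could
# read `grad` before the producer stream's kernels have finished.
consumer_stream = torch.cuda.current_stream()
consumer_stream.wait_stream(producer_stream)

# Tell the caching allocator that `grad` is now used on the consumer stream,
# so its memory is not reclaimed while that stream still has work queued.
grad.record_stream(consumer_stream)

out = grad * 2  # safe to consume on the current stream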
third_party: Update kineto to fix libtorch builds (pytorch#54205)
Signed-off-by: Eli Uriegas <eliuriegas@fb.com>
Fix hipify_python (pytorch#52756)
Co-authored-by: rraminen <rraminen@amd.com>
Co-authored-by: Nikita Shulga <nshulga@fb.com>
[1.8] Fix onnx mixed precision export for layernorm & fuseLogSoftmaxNllLoss (pytorch#52510)
Co-authored-by: Shubham Bhokare <32080845+shubhambhokare1@users.noreply.github.com>
[v1.8 patch] [Resubmission] Add a documentation page for DDP communication hooks (pytorch#52215)
Co-authored-by: wayi <wayi@devgpu238.prn2.facebook.com>
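Since this entry adds a documentation page for DDP communication hooks, a short sketch of the registration API may help. This is a hedged illustration, not content from the docs page itself: it assumes a process group has already been initialized via dist.init_process_group, and `model` and `rank` are placeholders defined elsewhere.

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.distributed.algorithms.ddp_comm_hooks import default_hooks

# Wrap the model in DDP as usual; `model` and `rank` are placeholders here.
ddp_model = DDP(model.cuda(rank), device_ids=[rank])

# Built-in hook: cast gradients to fp16 before all-reduce to cut bandwidth.
ddp_model.register_comm_hook(state=None, hook=default_hooks.fp16_compress_hook)

# The general shape of a custom hook: all-reduce and average the gradient
# bucket, returning a Future that DDP will wait on.
def allreduce_avg_hook(state, bucket):
    tensor = bucket.buffer().div_(dist.get_world_size())
    fut = dist.all_reduce(tensor, async_op=True).get_future()
    return fut.then(lambda f: f.value()[0])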
[FX][docs] Indent forward (pytorch#51802)

Summary: Pull Request resolved: pytorch#51802

lol

Test Plan: Imported from OSS
Reviewed By: zou3519
Differential Revision: D26284311
Pulled By: jamesr66a
fbshipit-source-id: 0d303d8c99131abb8d97e0acd0ac2d810e1e950c