Tags: AliJahan/pytorch
[JIT] Improve class type annotation inference (pytorch#46422)

**Summary** In `try_ann_to_type`, if an annotation has an attribute named `__torch_script_class__`, it is assumed to be a TorchScript class that has already been scripted. However, if it is a class that extends another class, this code path causes a crash, because it looks up the JIT type for the class by name in the compilation unit. That JIT type cannot exist, because inheritance is not supported. This commit fixes the crash by looking up the qualified name of the class in `torch.jit._state._script_class` to ascertain whether it has already been scripted (instead of looking for a `__torch_script_class__` attribute on the class object).

**Test Plan** This commit adds a unit test consisting of the code sample from the issue that reported this problem.

**Fixes** This commit fixes pytorch#45860.

ghstack-source-id: 6fe19a4
Pull Request resolved: pytorch#45940
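The failure mode above can be sketched in plain Python without TorchScript: a marker attribute set on a base class is inherited by subclasses through normal attribute lookup, so an unscripted subclass is misidentified as scripted, whereas a registry keyed by qualified name (the approach this commit switches to) is not fooled. The `register` helper and `_script_classes` dict below are illustrative stand-ins, not PyTorch APIs.

```python
# Minimal sketch (plain Python, no TorchScript): why checking an inherited
# attribute misidentifies subclasses, while a name-keyed registry does not.

_script_classes = {}  # stand-in for a registry keyed by qualified name


def register(cls):
    cls.__torch_script_class__ = True           # old marker: an attribute
    _script_classes[cls.__qualname__] = cls     # new marker: registry entry
    return cls


@register
class Base:
    pass


class Sub(Base):  # extends Base but was never scripted/registered
    pass


# Attribute check: Sub *inherits* the marker, so it looks scripted (wrong).
attr_says_scripted = hasattr(Sub, "__torch_script_class__")

# Registry check: Sub's qualified name is absent, so it is not scripted (right).
registry_says_scripted = Sub.__qualname__ in _script_classes

print(attr_says_scripted, registry_says_scripted)  # True False
```

With the attribute check, the subclass would be sent down the "already scripted" path and the by-name JIT type lookup would fail; the registry check routes it to normal compilation instead.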
[1.7] Remove torch.vmap (pytorch#45571)

torch.vmap is a prototype feature and should not be in the stable binary. This PR:
- Removes the `torch.vmap` API
- Removes the documentation entry for `torch.vmap`
- Changes the vmap tests to use an internal API instead of `torch.vmap`.

Test Plan: Tested locally (test_torch, test_type_hints, test_vmap), but also wait for CI.
Revert "[release/1.6] .circleci: Don't use SCCACHE for windows release builds (pytorch#42024)"

This reverts commit 994b37b.
[release/1.6] [JIT] Don't include view ops in autodiff graphs (pytorch#42029)
* Don't include view ops in autodiff graphs
* Skip view ops in autodiff testing
* Two more tests
* Appease clang-format
* Pacify clang-format

Co-authored-by: eellison <eellison@fb.com>
Co-authored-by: Nikita Shulga <nikita.shulga@gmail.com>
[jit] fix tuple alias analysis (pytorch#41992)

Previously, when analyzing a TupleConstruct, we ignored the aliasing information of the inputs and simply marked all elements of the returned tuple as wildcards. But since we can fully reason about the contents of a tuple statically, we should be able to assign them aliasing information. This analysis was not only incomplete but produced incorrect results: if `a` is not a wildcard, then `a noalias wildcard`. So if we looked at `tuple(a)` and reported the aliasing info as `tuple(wildcard)`, then `tuple[0] noalias a`, which is wrong.
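The incorrect result described above can be shown with a toy model of the alias semantics: treat each value's alias info as a set of abstract memory locations, where two values may alias only if their sets intersect. The old TupleConstruct handling replaced every element's set with `{WILDCARD}`, severing the link back to `a`. This is a hedged sketch of the semantics in the commit message, not the actual `AliasDb` implementation.

```python
# Toy model of alias analysis: alias info is a set of abstract locations,
# and two values may alias only if their sets intersect.

WILDCARD = "*"


def may_alias(x, y):
    return bool(x & y)


def tuple_construct(elem_sets, precise):
    """Alias info for each slot of the constructed tuple."""
    if precise:
        return [set(s) for s in elem_sets]    # fixed: keep per-element info
    return [{WILDCARD} for _ in elem_sets]    # old: collapse to wildcards


a = {"loc_a"}  # `a` is a known, non-wildcard value
old_tuple = tuple_construct([a], precise=False)
new_tuple = tuple_construct([a], precise=True)

print(may_alias(old_tuple[0], a))  # False: old analysis says tuple[0] noalias a (wrong)
print(may_alias(new_tuple[0], a))  # True:  fixed analysis keeps tuple[0] aliasing a
```

In this model, the old behavior yields exactly the bug in the message: `tuple(a)` becomes `tuple(wildcard)`, and since `{WILDCARD}` and `{"loc_a"}` do not intersect, the analysis wrongly concludes `tuple[0] noalias a`.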
scatter/gather - check that inputs are of the same dimensionality (pytorch#41890)

Co-authored-by: Nikita Vedeneev <nik@quansight.com>
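The check this commit adds lives in ATen's C++ scatter/gather code, but the invariant it enforces is simple: `self`, `index` (and `src` for scatter) must all have the same number of dimensions. A minimal Python sketch of that validation, with a hypothetical helper name:

```python
# Hypothetical sketch of the added validation: scatter/gather inputs must
# agree in number of dimensions (shapes given as tuples).

def check_same_dim(self_shape, index_shape, src_shape=None):
    """Raise if the inputs differ in number of dimensions."""
    dims = {len(self_shape), len(index_shape)}
    if src_shape is not None:
        dims.add(len(src_shape))
    if len(dims) != 1:
        raise RuntimeError(
            "scatter/gather: all input tensors must have the same "
            "number of dimensions")


check_same_dim((3, 4), (3, 2))          # ok: both inputs are 2-D
try:
    check_same_dim((3, 4), (3,))        # 2-D self vs 1-D index
    raised = False
except RuntimeError:
    raised = True
print(raised)  # True
```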
Update pthreadpool to pthreadpool:029c88620802e1361ccf41d1970bd5b07fd… …6b7bb. (pytorch#40524) (pytorch#41190) Summary: Pull Request resolved: pytorch#40524 Reviewed By: ezyang Differential Revision: D22215742 Pulled By: AshkanAliabadi fbshipit-source-id: ef594e0901337a92b21ddd44e554da66c723eb7c
Release GIL during DDP construction. (pytorch#40877)

Summary: Pull Request resolved: pytorch#40495

As part of debugging flaky ddp_under_dist_autograd tests, I realized we were running into the following deadlock:
1) Rank 0 goes into DDP construction, holds the GIL, and waits for the broadcast in DDP construction.
2) Rank 3 is a little slower and performs an RRef fetch call before the DDP construction.
3) The RRef fetch call is done on Rank 0 and tries to acquire the GIL.
4) We now have a deadlock, since Rank 0 is waiting for Rank 3 to enter the collective and Rank 3 is waiting for Rank 0 to release the GIL.

ghstack-source-id: 106534442
Test Plan: 1) Ran ddp_under_dist_autograd 500 times. 2) waitforbuildbot
Differential Revision: D22205180
fbshipit-source-id: 6afd55342e801b9edb9591ff25158a244a8ea66a
Co-authored-by: Pritam Damania <pritam.damania@fb.com>
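The lock-ordering fix can be sketched with plain `threading` primitives: a `Lock` standing in for the GIL and a `Barrier` standing in for DDP's broadcast collective (both stand-ins are illustrative, not PyTorch APIs). The fix is the ordering inside `rank0()` — the "GIL" is released *before* blocking on the collective, so the other rank can acquire it, finish its fetch, and reach the collective too.

```python
# Toy sketch of the deadlock fix: never block on a collective while
# holding the lock that the other participant needs.
import threading

gil = threading.Lock()              # stand-in for the GIL
collective = threading.Barrier(2)   # stand-in for DDP's broadcast
done = []


def rank0():
    with gil:
        pass                        # DDP setup work done under the "GIL"
    collective.wait(timeout=5)      # fix: enter the collective lock-free
    done.append("rank0")


def rank3():
    with gil:                       # the RRef fetch needs the "GIL" on rank 0
        pass
    collective.wait(timeout=5)
    done.append("rank3")


threads = [threading.Thread(target=rank0), threading.Thread(target=rank3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(done))  # ['rank0', 'rank3'] — both ranks complete
```

If `rank0()` instead called `collective.wait()` inside the `with gil:` block, this sketch would reproduce the reported hang: rank 3 blocks on the lock and never reaches the barrier.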
.circleci: Fix upload to backup directory Signed-off-by: Eli Uriegas <eliuriegas@fb.com>