8000 Cuda runtime error (8) : invalid device function at THCTensorMathPointwise.cu for latest build · Issue #1955 · pytorch/pytorch · GitHub

Cuda runtime error (8) : invalid device function at THCTensorMathPointwise.cu for latest build #1955


Closed
dendisuhubdy opened this issue Jun 30, 2017 · 16 comments

Comments

@dendisuhubdy

So I've tried to compile PyTorch locally for development purposes in one Anaconda environment (dev), with another environment (lisa) installed following the instructions on pytorch.org. I get the following error when running the VAE code from the PyTorch tutorials on the newest version:

(dev) [SLURM] suhubdyd@kepler2:~/research/models/pytorch-models/vae$ python main.py 
THCudaCheck FAIL file=/u/suhubdyd/research/dl-frameworks/pytorch/torch/lib/THC/generated/../generic/THCTensorMathPointwise.cu line=247 error=8 : invalid device function
Traceback (most recent call last):
  File "main.py", line 138, in <module>
    train(epoch)
  File "main.py", line 108, in train
    recon_batch, mu, logvar = model(data)
  File "/u/suhubdyd/.conda/envs/dev/lib/python2.7/site-packages/torch/nn/modules/module.py", line 206, in __call__
    result = self.forward(*input, **kwargs)
  File "main.py", line 71, in forward
    mu, logvar = self.encode(x.view(-1, 784))
  File "main.py", line 54, in encode
    h1 = self.relu(self.fc1(x))
  File "/u/suhubdyd/.conda/envs/dev/lib/python2.7/site-packages/torch/nn/modules/module.py", line 206, in __call__
    result = self.forward(*input, **kwargs)
  File "/u/suhubdyd/.conda/envs/dev/lib/python2.7/site-packages/torch/nn/modules/linear.py", line 54, in forward
    return self._backend.Linear.apply(input, self.weight, self.bias)
  File "/u/suhubdyd/.conda/envs/dev/lib/python2.7/site-packages/torch/nn/_functions/linear.py", line 14, in forward
    output.add_(bias.expand_as(output))
RuntimeError: cuda runtime error (8) : invalid device function at /u/suhubdyd/research/dl-frameworks/pytorch/torch/lib/THC/generated/../generic/THCTensorMathPointwise.cu:247

whereas it works fine with the stable distribution:

(lisa) [SLURM] suhubdyd@kepler2:~/research/models/pytorch-models/vae$ python main.py 
Train Epoch: 1 [0/60000 (0%)]   Loss: 549.307800
Train Epoch: 1 [1280/60000 (2%)]        Loss: 311.565613
Train Epoch: 1 [2560/60000 (4%)]        Loss: 236.338989
Train Epoch: 1 [3840/60000 (6%)]        Loss: 225.371078
Train Epoch: 1 [5120/60000 (9%)]        Loss: 208.993668
Train Epoch: 1 [6400/60000 (11%)]       Loss: 206.427368
Train Epoch: 1 [7680/60000 (13%)]       Loss: 208.580597
Train Epoch: 1 [8960/60000 (15%)]       Loss: 201.324646
Train Epoch: 1 [10240/60000 (17%)]      Loss: 191.824326
Train Epoch: 1 [11520/60000 (19%)]      Loss: 195.214188
Train Epoch: 1 [12800/60000 (21%)]      Loss: 189.770447
Train Epoch: 1 [14080/60000 (23%)]      Loss: 173.119644
Train Epoch: 1 [15360/60000 (26%)]      Loss: 179.030197
Train Epoch: 1 [16640/60000 (28%)]      Loss: 170.247345
Train Epoch: 1 [17920/60000 (30%)]      Loss: 169.193451
Train Epoch: 1 [19200/60000 (32%)]      Loss: 162.828690
Train Epoch: 1 [20480/60000 (34%)]      Loss: 158.171326
Train Epoch: 1 [21760/60000 (36%)]      Loss: 158.530518
Train Epoch: 1 [23040/60000 (38%)]      Loss: 155.896255
Train Epoch: 1 [24320/60000 (41%)]      Loss: 158.835968
Train Epoch: 1 [25600/60000 (43%)]      Loss: 152.416977
Train Epoch: 1 [26880/60000 (45%)]      Loss: 153.593964
Train Epoch: 1 [28160/60000 (47%)]      Loss: 147.944260
Train Epoch: 1 [29440/60000 (49%)]      Loss: 148.223892
Train Epoch: 1 [30720/60000 (51%)]      Loss: 145.770905
Train Epoch: 1 [32000/60000 (53%)]      Loss: 144.410706
Train Epoch: 1 [33280/60000 (55%)]      Loss: 147.592163
Train Epoch: 1 [34560/60000 (58%)]      Loss: 149.320328
@pavanky
Contributor
pavanky commented Jun 30, 2017

Did you compile pytorch on one machine and install it on another?

@dendisuhubdy
Author

@pavanky yes, I did a python setup.py install on one machine, and a conda install pytorch torchvision cuda80 -c soumith on another, if that is what you mean.

@pavanky
Contributor
pavanky commented Jun 30, 2017

@dendisuhubdy The error you mentioned happens when a CUDA application is built for the wrong GPU architecture. I am not sure how that might happen in your case. What GPU are you using?
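To make the failure mode concrete: a compiled CUDA kernel is only usable on a device if the binary carries SASS for that device's compute capability (same major version, equal-or-lower minor) or embeds PTX that the driver can JIT-compile forward. A minimal pure-Python sketch of that rule (the `covers` helper and its compatibility logic are illustrative, not PyTorch API):

```python
def covers(arch_list, device_cc):
    """Return True if a TORCH_CUDA_ARCH_LIST-style string can serve a device.

    arch_list: e.g. "3.7;5.2+PTX"; device_cc: e.g. (6, 1) for a GTX 1080 Ti.
    SASS built for X.y runs on X.z with z >= y (same major only); an entry
    marked +PTX can additionally be JIT-compiled forward to newer devices.
    """
    for entry in filter(None, (e.strip() for e in arch_list.split(";"))):
        has_ptx = entry.endswith("+PTX")
        base = entry[:-4] if has_ptx else entry
        major, minor = (int(p) for p in base.split("."))
        if major == device_cc[0] and minor <= device_cc[1]:
            return True          # binary-compatible SASS is present
        if has_ptx and (major, minor) <= device_cc:
            return True          # driver JIT-compiles the embedded PTX
    return False

# A build done only for a Quadro P6000 (6.1) fails on a K80 (3.7):
print(covers("6.1", (3, 7)))          # False -> "invalid device function"
print(covers("3.7;5.2+PTX", (6, 1)))  # True  -> PTX covers the newer GPU
```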

@dendisuhubdy
Author

@pavanky Quadro P6000

@ngimel
Collaborator
ngimel commented Jun 30, 2017

As @pavanky said, this error appears if you compile for one architecture and deploy on another. It should not happen if you compile and run on the same machine (unless you have stale build directories where the GPU architecture is somehow set incorrectly). Try removing your build directories (python setup.py clean) and recompiling. Also, make sure that you use CUDA 8; Pascal is not supported in earlier CUDA versions.

@pavanky
Contributor
pavanky commented Jun 30, 2017

@dendisuhubdy Try to get the latest version of CUDA 8. @ngimel correct me if I am wrong, but I think CUDA 8 pre-release (8.0.26) didn't support Pascal

@ngimel
Collaborator
ngimel commented Jun 30, 2017

@pavanky, it supported Pascal, but as you've pointed out in another issue, compilation with CUDA 8 pre-release is still broken.

@xuyifeng-nwpu

@pavanky @ngimel As you said, I encountered the same problem.
My host's GPU is a 980 Ti, while my server's GPUs are K80s. When I transfer my host's Docker image to the server and run the Torch code, the same error message shows. Can you solve this problem between two different GPUs?

@soumith
Member
soumith commented Jul 5, 2017

@xuyifeng-nwpu you can set this environment variable before compiling PyTorch in your docker:

export TORCH_CUDA_ARCH_LIST="3.0;3.5;5.0;5.2+PTX;6.0;6.1"

It will take much longer to compile, but the docker will work everywhere.
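For reference, each entry in TORCH_CUDA_ARCH_LIST roughly corresponds to nvcc -gencode flags: a plain "6.1" produces device code (SASS) for sm_61 only, while "5.2+PTX" additionally embeds PTX that newer GPUs can JIT at load time. A sketch of that expansion (the helper name and exact flag layout are illustrative, not the build system's actual code):

```python
def gencode_flags(arch_list):
    """Approximate the nvcc -gencode flags implied by TORCH_CUDA_ARCH_LIST."""
    flags = []
    for entry in filter(None, (e.strip() for e in arch_list.split(";"))):
        has_ptx = entry.endswith("+PTX")
        num = (entry[:-4] if has_ptx else entry).replace(".", "")
        # SASS for this exact architecture:
        flags.append(f"-gencode arch=compute_{num},code=sm_{num}")
        if has_ptx:
            # Also embed PTX so newer architectures can JIT at load time:
            flags.append(f"-gencode arch=compute_{num},code=compute_{num}")
    return flags

for flag in gencode_flags("3.0;5.2+PTX;6.1"):
    print(flag)
```

This is why the long list above compiles so slowly: every entry multiplies the amount of device code nvcc must generate.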

@xuyifeng-nwpu
xuyifeng-nwpu commented Jul 6, 2017

@soumith Thanks for your reply.
1: My code is Torch, not PyTorch. I worry that Torch code cannot run in the new PyTorch environment, so I haven't installed PyTorch. Do I need to install PyTorch? Is Torch code compatible with PyTorch?
2: Where and when do I run export TORCH_CUDA_ARCH_LIST="3.0;3.5;5.0;5.2+PTX;6.0;6.1"?
In the Docker container, or on the host via .bashrc?
What is "compiling PyTorch"? Is compiling PyTorch the same as reinstalling Torch?
Forgive me, I am a novice.
3: The K80's compute capability is 3.7. Should I add 3.7 to the list, as in
export TORCH_CUDA_ARCH_LIST="3.0;3.5;3.7;5.0;5.2+PTX;6.0;6.1"?

@pavanky
Contributor
pavanky commented Jul 6, 2017

@xuyifeng-nwpu you only need to export it when you are building Torch / PyTorch; it is not needed at runtime. And yes, you will need 3.7 for the K80. If you only care about the 980 Ti and the K80, you can probably just use export TORCH_CUDA_ARCH_LIST="3.7;5.2+PTX" to reduce the build time.
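The suggestion above generalizes: list one SASS target per compute capability you actually own, and tack +PTX onto the newest entry for forward compatibility with GPUs you don't own yet. A small hypothetical helper (not part of Torch or PyTorch) that builds such a list:

```python
def minimal_arch_list(device_ccs):
    """Build a compact TORCH_CUDA_ARCH_LIST from device compute capabilities.

    device_ccs: iterable of (major, minor) tuples, e.g. [(3, 7), (5, 2)].
    One SASS target per distinct CC; '+PTX' on the newest for future GPUs.
    """
    ccs = sorted(set(device_ccs))
    parts = [f"{major}.{minor}" for major, minor in ccs]
    parts[-1] += "+PTX"
    return ";".join(parts)

# K80 (3.7) plus 980 Ti (5.2) -> the list suggested above:
print(minimal_arch_list([(5, 2), (3, 7)]))  # 3.7;5.2+PTX
```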

@dendisuhubdy
Author

@pavanky @xuyifeng-nwpu @soumith anything on my side that I could do to help you guys?

@xuyifeng-nwpu
xuyifeng-nwpu commented Jul 6, 2017

@pavanky @dendisuhubdy @soumith thanks for the quick replies.
1: How do I "build Torch"?
Option 1: reinstall Torch (luarocks install torch), or
Option 2: luarocks install cutorch and luarocks install cunn?

2: The K80 (Tesla) is 3.7 and the 980 Ti (Maxwell) is 5.2.
Is export TORCH_CUDA_ARCH_LIST="3.7;5.2+PTX" right?
Does "PTX" stand for Pascal, Tesla, and Maxwell?

3: Where can I learn the syntax of TORCH_CUDA_ARCH_LIST?

4: Where do I run the export command?
On the host, or in the Docker container?

Thanks again

@soumith
Member
soumith commented Jul 13, 2017

As everyone on this thread has pretty much said, @dendisuhubdy, this is because you are compiling from source on one machine and running the source installation on another machine. My workaround is in the comments above. Closing the issue, as there's no action to be taken from the PyTorch devs' side.

@soumith soumith closed this as completed Jul 13, 2017
@ysnan
ysnan commented Nov 19, 2018

@soumith Thank you for analyzing the problem.
I encounter a similar problem with slightly different settings. I try to use nn.DataParallel to parallelize computation across two different GPUs.
I have two GPUs: one is a GeForce GTX 1080 Ti with CC 6.1, and the other is a Quadro M5000 with CC 5.2.
With export TORCH_CUDA_ARCH_LIST="6.1;5.2+PTX" set, the same error pops up. Could you please help analyze this scenario with data parallelism.

@soumith
Member
soumith commented Dec 13, 2018

@YuesongNan even within DataParallel the situation should be the same. I think you might have multiple pytorch versions installed.

houseroad added a commit to houseroad/pytorch that referenced this issue May 26, 2020
…3651cd (pytorch#38203)

Summary:
Pull Request resolved: pytorch#38203

Previous import was 79a7e0df7e86e0f32e7a05f563b24a566540c18b

Included changes:
- **[20b3e10e](onnx/onnx@20b3e10e)**: Add 'ignore_index' input in the spec for SoftmaxCrossEntropyLoss and NLLLoss. (pytorch#2680) <M. Zeeshan Siddiqui>
- **[eae3eb8c](onnx/onnx@eae3eb8c)**: Use cmake GNUInstallDirs (pytorch#2661) <Gustavo Alvarez>
- **[106821e9](onnx/onnx@106821e9)**: Update sequence test case so input is not scalar and splits are specified (pytorch#2675) <Scott McKay>
- **[e094e101](onnx/onnx@e094e101)**: Remove unnecessary copies and std::move (pytorch#2684) <Changming Sun>
- **[71145275](onnx/onnx@71145275)**: Update Batchnorm test (pytorch#2674) <Lara Haidar>
- **[da13be2d](onnx/onnx@da13be2d)**: Rename OPTIONAL to OPTIONAL_VALUE (pytorch#2682) <Changming Sun>
- **[2987fa06](onnx/onnx@2987fa06)**: Adding CI for ONNX Debug mode (Linux, OSX) (pytorch#2651) <Vinitra Swamy>
- **[46fe392d](onnx/onnx@46fe392d)**: Update Pow input types in Opset 12 (pytorch#2666) <Lara Haidar>
- **[ac1caf3b](onnx/onnx@ac1caf3b)**: Change type of label tensor to int32/int64 in SoftmaxCrossEntropyLoss spec. (pytorch#2667) <M. Zeeshan Siddiqui>
- **[c2fefcbf](onnx/onnx@c2fefcbf)**: [Training] SG with Momentum Optimizer (pytorch#1959) <Wei-Sheng Chin>
- **[8d15705e](onnx/onnx@8d15705e)**: [Training] Add Adagrad optimizer operator (pytorch#1955) <Wei-Sheng Chin>
- **[94b01cdd](onnx/onnx@94b01cdd)**: Suppress a warning in unsqueeze (pytorch#2637) <Hong Xu>
- **[0582d526](onnx/onnx@0582d526)**: Fix Greater/LessOrEqual function definition (pytorch#2645) <Takeshi Watanabe>
- **[b852d819](onnx/onnx@b852d819)**: Increment version number to 1.7.0 (pytorch#2639) <Chin Huang>
- **[ff4bb553](onnx/onnx@ff4bb553)**: Regenerate Min test data (pytorch#2644) <Takeshi Watanabe>

Test Plan: ci

Reviewed By: hl475

Differential Revision: D21493165

fbshipit-source-id: c5eb2f0143f1763166f46f118ec0985b41f759b3
houseroad added a commit to houseroad/pytorch that referenced this issue May 27, 2020
…2385a6

Summary:
Previous import was 79a7e0df7e86e0f32e7a05f563b24a566540c18b

Included changes:
- **[eae3eb8c](onnx/onnx@eae3eb8c)**: Use cmake GNUInstallDirs (pytorch#2661) <Gustavo Alvarez>
- **[106821e9](onnx/onnx@106821e9)**: Update sequence test case so input is not scalar and splits are specified (pytorch#2675) <Scott McKay>
- **[e094e101](onnx/onnx@e094e101)**: Remove unnecessary copies and std::move (pytorch#2684) <Changming Sun>
- **[71145275](onnx/onnx@71145275)**: Update Batchnorm test (pytorch#2674) <Lara Haidar>
- **[da13be2d](onnx/onnx@da13be2d)**: Rename OPTIONAL to OPTIONAL_VALUE (pytorch#2682) <Changming Sun>
- **[2987fa06](onnx/onnx@2987fa06)**: Adding CI for ONNX Debug mode (Linux, OSX) (pytorch#2651) <Vinitra Swamy>
- **[46fe392d](onnx/onnx@46fe392d)**: Update Pow input types in Opset 12 (pytorch#2666) <Lara Haidar>
- **[ac1caf3b](onnx/onnx@ac1caf3b)**: Change type of label tensor to int32/int64 in SoftmaxCrossEntropyLoss spec. (pytorch#2667) <M. Zeeshan Siddiqui>
- **[c2fefcbf](onnx/onnx@c2fefcbf)**: [Training] SG with Momentum Optimizer (pytorch#1959) <Wei-Sheng Chin>
- **[8d15705e](onnx/onnx@8d15705e)**: [Training] Add Adagrad optimizer operator (pytorch#1955) <Wei-Sheng Chin>
- **[94b01cdd](onnx/onnx@94b01cdd)**: Suppress a warning in unsqueeze (pytorch#2637) <Hong Xu>
- **[0582d526](onnx/onnx@0582d526)**: Fix Greater/LessOrEqual function definition (pytorch#2645) <Takeshi Watanabe>
- **[b852d819](onnx/onnx@b852d819)**: Increment version number to 1.7.0 (pytorch#2639) <Chin Huang>
- **[ff4bb553](onnx/onnx@ff4bb553)**: Regenerate Min test data (pytorch#2644) <Takeshi Watanabe>

Test Plan: ci

Differential Revision: D21750299

fbshipit-source-id: ba52d96806b70fd4ff56a9743b5e56879ecf2b88
jjsjann123 pushed a commit to jjsjann123/pytorch that referenced this issue Oct 4, 2022
jjsjann123 added a commit that referenced this issue Oct 26, 2022
7117a7e patching nvfuser conv cudnn test numerics mismatch (#2048)
65af1a4 Inserting sync for redundant parallel types is already done at the (#2023)
6ac74d1 Fix sync map (#2047)
f5bca33 Bank conflict checker improvements (#2032)
d2ca7e3 Minor update on cp.async code generation. (#1901)
d36cf61 Test file cleanup (#2040)
0b8e83f Allow non-root trivial reductions (#2037)
a2dfe40 Fix vectorize size calculation (#2035)
e040676 Use withPredicate to replace setPredicate to maintain Exprs immutable (#2025)
197221b removing ci workflow (#2034)
40e2703 Reduction rand like patch (#2031)
bc77266 Add utility for checking bank conflict of shared memory (#2029)
ddd1cf7 Add back FusionReductionWithTrivialReduction_CUDA (#2030)
fbd97e5 Revert "Cleanup trivial reduction workarounds (#2006)" (#2024)
bca20c1 Cleanup trivial reduction workarounds (#2006)
e4b6585 Trivial forwarding (#1995)
1a0e355 Fix contiguity analysis of predicates to match updated contiguity. (#1991)
a4effa6 Enable output allocation cache (#2010)
35440b7 Patching bn inference (#2016)
0f9f0b4 Add matmul benchmark (#2007)
45045cd Enable tests previously disabled due to an aliasing bug (#2005)
967aa77 Contiguous indexing for View operations (#1990)
a43cb20 Make inlining even more modular (#2004)
dc45835 Test util cleanup (#2003)
3ca21eb More strict validation (#2000)
a7a7d57 Fix build problem (#1999)
fc235b0 Just fixes comments (#1998)
482386c cleanup (#1997)
4cbe0db Improve divisible split detection (#1970)
42ccc52 Minor build fix. (#1996)
fcf8c09 Cleanup of lower_utils.cpp: Isolate out GpuLower usage (#1989)
15f2f6d Move ConcretizedBroadcastDomains to shared_ptr in GpuLower. (#1988)
8f1c7f5 Minor cleanup lower_unroll.cpp (#1994)
1d9858c Minor cleanup (#1992)
f262d9c Add support for uniform RNG (#1986)
eb1dad1 Remove non-const functions, remove GpuLower instance on build, pass in ca_map. (#1987)
634820c Add support for some empty fusion (#1981)
eabe8d8 Segment self mapping fusions (#1954)
e96aacf Enable Transpose operation (#1882)
425dce2 Add a null scheduler that helps segmenting away no-op schedules (#1835)
306d4a6 Fix canScheduleCompileTime check of transpose scheduler (#1969)
b1bd32c Minor fix (#1967)
bd93578 Enable transpose scheduler (#1927)
b7a206e Move scheduler vectorize utilities into their own file (#1959)
d9420e4 View scheduling (#1928)
c668e13 Upstream push ci fixes (#1965)
c40202b Fix dump effective bandwidth (#1962)
93505bc WAR on index mapping when exact and permissive maps differ (#1960)
45e95fd Allow splitting inner-most ID to create virtual innermost ID in transpose scheduler (#1930)
a3ecb33 Improve the comments at the beginning of index_compute.h (#1946)
f7bc341 Remove unused variables (#1955)
df3393a Some cleanup (#1957)
7d1d7c8 TVDomainGuard factory (#1953)
357ba22 Fill allocation with nan on tests (#1956)
8eafc54 Fix detection of unmappable root domains (#1952)
90a51f2 Some indexing cleanups, Add eye support (#1940)
ddc01e4 Exclude unsupported data types (#1951)
992e17c test the groups the same order as they are merged (#1949)
208262b Move detection of self mapping IDs to IterDomainGraph from (#1941)
ac4de38 Merge pull request #1945 from csarofeen/master_merge_0828
6310948 Add full, full_like, zeros, zeros_like, ones, ones_like (#1943)
aab10bc Merge remote-tracking branch 'upstream/viable/strict' into HEAD
4c254c0 Fix arange when step is negative (#1942)
89330aa Tensor factories must set the output shape as its input (#1939)

[ghstack-poisoned]
jjsjann123 added a commit that referenced this issue Oct 26, 2022
7117a7e patching nvfuser conv cudnn test numerics mismatch (#2048)
65af1a4 Inserting sync for redundant parallel types is already done at the (#2023)
6ac74d1 Fix sync map (#2047)
f5bca33 Bank conflict checker improvements (#2032)
d2ca7e3 Minor update on cp.async code generation. (#1901)
d36cf61 Test file cleanup (#2040)
0b8e83f Allow non-root trivial reductions (#2037)
a2dfe40 Fix vectorize size calculation (#2035)
e040676 Use withPredicate to replace setPredicate to maintain Exprs immutable (#2025)
197221b removing ci workflow (#2034)
40e2703 Reduction rand like patch (#2031)
bc77266 Add utility for checking bank conflict of shared memory (#2029)
ddd1cf7 Add back FusionReductionWithTrivialReduction_CUDA (#2030)
fbd97e5 Revert "Cleanup trivial reduction workarounds (#2006)" (#2024)
bca20c1 Cleanup trivial reduction workarounds (#2006)
e4b6585 Trivial forwarding (#1995)
1a0e355 Fix contiguity analysis of predicates to match updated contiguity. (#1991)
a4effa6 Enable output allocation cache (#2010)
35440b7 Patching bn inference (#2016)
0f9f0b4 Add matmul benchmark (#2007)
45045cd Enable tests previously disabled due to an aliasing bug (#2005)
967aa77 Contiguous indexing for View operations (#1990)
a43cb20 Make inlining even more modular (#2004)
dc45835 Test util cleanup (#2003)
3ca21eb More strict validation (#2000)
a7a7d57 Fix build problem (#1999)
fc235b0 Just fixes comments (#1998)
482386c cleanup (#1997)
4cbe0db Improve divisible split detection (#1970)
42ccc52 Minor build fix. (#1996)
fcf8c09 Cleanup of lower_utils.cpp: Isolate out GpuLower usage (#1989)
15f2f6d Move ConcretizedBroadcastDomains to shared_ptr in GpuLower. (#1988)
8f1c7f5 Minor cleanup lower_unroll.cpp (#1994)
1d9858c Minor cleanup (#1992)
f262d9c Add support for uniform RNG (#1986)
eb1dad1 Remove non-const functions, remove GpuLower instance on build, pass in ca_map. (#1987)
634820c Add support for some empty fusion (#1981)
eabe8d8 Segment self mapping fusions (#1954)
e96aacf Enable Transpose operation (#1882)
425dce2 Add a null scheduler that helps segmenting away no-op schedules (#1835)
306d4a6 Fix canScheduleCompileTime check of transpose scheduler (#1969)
b1bd32c Minor fix (#1967)
bd93578 Enable transpose scheduler (#1927)
b7a206e Move scheduler vectorize utilities into their own file (#1959)
d9420e4 View scheduling (#1928)
c668e13 Upstream push ci fixes (#1965)
c40202b Fix dump effective bandwidth (#1962)
93505bc WAR on index mapping when exact and permissive maps differ (#1960)
45e95fd Allow splitting inner-most ID to create virtual innermost ID in transpose scheduler (#1930)
a3ecb33 Improve the comments at the beginning of index_compute.h (#1946)
f7bc341 Remove unused variables (#1955)
df3393a Some cleanup (#1957)
7d1d7c8 TVDomainGuard factory (#1953)
357ba22 Fill allocation with nan on tests (#1956)
8eafc54 Fix detection of unmappable root domains (#1952)
90a51f2 Some indexing cleanups, Add eye support (#1940)
ddc01e4 Exclude unsupported data types (#1951)
992e17c test the groups the same order as they are merged (#1949)
208262b Move detection of self mapping IDs to IterDomainGraph from (#1941)
ac4de38 Merge pull request #1945 from csarofeen/master_merge_0828
6310948 Add full, full_like, zeros, zeros_like, ones, ones_like (#1943)
aab10bc Merge remote-tracking branch 'upstream/viable/strict' into HEAD
4c254c0 Fix arange when step is negative (#1942)
89330aa Tensor factories must set the output shape as its input (#1939)

ghstack-source-id: 7ebfc36
Pull Request resolved: #87779
jjsjann123 added a commit that referenced this issue Oct 27, 2022
Syncing nvfuser devel branch to upstream master. https://github.com/csarofeen/pytorch/

Codegen changes include (TBD):

Commits that's in this PR from the devel branch:

```
7117a7e patching nvfuser conv cudnn test numerics mismatch (#2048)
65af1a4 Inserting sync for redundant parallel types is already done at the (#2023)
6ac74d1 Fix sync map (#2047)
f5bca33 Bank conflict checker improvements (#2032)
d2ca7e3 Minor update on cp.async code generation. (#1901)
d36cf61 Test file cleanup (#2040)
0b8e83f Allow non-root trivial reductions (#2037)
a2dfe40 Fix vectorize size calculation (#2035)
e040676 Use withPredicate to replace setPredicate to maintain Exprs immutable (#2025)
197221b removing ci workflow (#2034)
40e2703 Reduction rand like patch (#2031)
bc77266 Add utility for checking bank conflict of shared memory (#2029)
ddd1cf7 Add back FusionReductionWithTrivialReduction_CUDA (#2030)
fbd97e5 Revert "Cleanup trivial reduction workarounds (#2006)" (#2024)
bca20c1 Cleanup trivial reduction workarounds (#2006)
e4b6585 Trivial forwarding (#1995)
1a0e355 Fix contiguity analysis of predicates to match updated contiguity. (#1991)
a4effa6 Enable output allocation cache (#2010)
35440b7 Patching bn inference (#2016)
0f9f0b4 Add matmul benchmark (#2007)
45045cd Enable tests previously disabled due to an aliasing bug (#2005)
967aa77 Contiguous indexing for View operations (#1990)
a43cb20 Make inlining even more modular (#2004)
dc45835 Test util cleanup (#2003)
3ca21eb More strict validation (#2000)
a7a7d57 Fix build problem (#1999)
fc235b0 Just fixes comments (#1998)
482386c cleanup (#1997)
4cbe0db Improve divisible split detection (#1970)
42ccc52 Minor build fix. (#1996)
fcf8c09 Cleanup of lower_utils.cpp: Isolate out GpuLower usage (#1989)
15f2f6d Move ConcretizedBroadcastDomains to shared_ptr in GpuLower. (#1988)
8f1c7f5 Minor cleanup lower_unroll.cpp (#1994)
1d9858c Minor cleanup (#1992)
f262d9c Add support for uniform RNG (#1986)
eb1dad1 Remove non-const functions, remove GpuLower instance on build, pass in ca_map. (#1987)
634820c Add support for some empty fusion (#1981)
eabe8d8 Segment self mapping fusions (#1954)
e96aacf Enable Transpose operation (#1882)
425dce2 Add a null scheduler that helps segmenting away no-op schedules (#1835)
306d4a6 Fix canScheduleCompileTime check of transpose scheduler (#1969)
b1bd32c Minor fix (#1967)
bd93578 Enable transpose scheduler (#1927)
b7a206e Move scheduler vectorize utilities into their own file (#1959)
d9420e4 View scheduling (#1928)
c668e13 Upstream push ci fixes (#1965)
c40202b Fix dump effective bandwidth (#1962)
93505bc WAR on index mapping when exact and permissive maps differ (#1960)
45e95fd Allow splitting inner-most ID to create virtual innermost ID in transpose scheduler (#1930)
a3ecb33 Improve the comments at the beginning of index_compute.h (#1946)
f7bc341 Remove unused variables (#1955)
df3393a Some cleanup (#1957)
7d1d7c8 TVDomainGuard factory (#1953)
357ba22 Fill allocation with nan on tests (#1956)
8eafc54 Fix detection of unmappable root domains (#1952)
90a51f2 Some indexing cleanups, Add eye support (#1940)
ddc01e4 Exclude unsupported data types (#1951)
992e17c test the groups the same order as they are merged (#1949)
208262b Move detection of self mapping IDs to IterDomainGraph from (#1941)
ac4de38 Merge pull request #1945 from csarofeen/master_merge_0828
6310948 Add full, full_like, zeros, zeros_like, ones, ones_like (#1943)
aab10bc Merge remote-tracking branch 'upstream/viable/strict' into HEAD
4c254c0 Fix arange when step is negative (#1942)
89330aa Tensor factories must set the output shape as its input (#1939)
```

[ghstack-poisoned]
jjsjann123 added a commit that referenced this issue Oct 27, 2022
Syncing nvfuser devel branch to upstream master. https://github.com/csarofeen/pytorch/

Codegen changes include (TBD):

Commits that's in this PR from the devel branch:

```
7117a7e patching nvfuser conv cudnn test numerics mismatch (#2048)
65af1a4 Inserting sync for redundant parallel types is already done at the (#2023)
6ac74d1 Fix sync map (#2047)
f5bca33 Bank conflict checker improvements (#2032)
d2ca7e3 Minor update on cp.async code generation. (#1901)
d36cf61 Test file cleanup (#2040)
0b8e83f Allow non-root trivial reductions (#2037)
a2dfe40 Fix vectorize size calculation (#2035)
e040676 Use withPredicate to replace setPredicate to maintain Exprs immutable (#2025)
197221b removing ci workflow (#2034)
40e2703 Reduction rand like patch (#2031)
bc77266 Add utility for checking bank conflict of shared memory (#2029)
ddd1cf7 Add back FusionReductionWithTrivialReduction_CUDA (#2030)
fbd97e5 Revert "Cleanup trivial reduction workarounds (#2006)" (#2024)
bca20c1 Cleanup trivial reduction workarounds (#2006)
e4b6585 Trivial forwarding (#1995)
1a0e355 Fix contiguity analysis of predicates to match updated contiguity. (#1991)
a4effa6 Enable output allocation cache (#2010)
35440b7 Patching bn inference (#2016)
0f9f0b4 Add matmul benchmark (#2007)
45045cd Enable tests previously disabled due to an aliasing bug (#2005)
967aa77 Contiguous indexing for View operations (#1990)
a43cb20 Make inlining even more modular (#2004)
dc45835 Test util cleanup (#2003)
3ca21eb More strict validation (#2000)
a7a7d57 Fix build problem (#1999)
fc235b0 Just fixes comments (#1998)
482386c cleanup (#1997)
4cbe0db Improve divisible split detection (#1970)
42ccc52 Minor build fix. (#1996)
fcf8c09 Cleanup of lower_utils.cpp: Isolate out GpuLower usage (#1989)
15f2f6d Move ConcretizedBroadcastDomains to shared_ptr in GpuLower. (#1988)
8f1c7f5 Minor cleanup lower_unroll.cpp (#1994)
1d9858c Minor cleanup (#1992)
f262d9c Add support for uniform RNG (#1986)
eb1dad1 Remove non-const functions, remove GpuLower instance on build, pass in ca_map. (#1987)
634820c Add support for some empty fusion (#1981)
eabe8d8 Segment self mapping fusions (#1954)
e96aacf Enable Transpose operation (#1882)
425dce2 Add a null scheduler that helps segmenting away no-op schedules (#1835)
306d4a6 Fix canScheduleCompileTime check of transpose scheduler (#1969)
b1bd32c Minor fix (#1967)
bd93578 Enable transpose scheduler (#1927)
b7a206e Move scheduler vectorize utilities into their own file (#1959)
d9420e4 View scheduling (#1928)
c668e13 Upstream push ci fixes (#1965)
c40202b Fix dump effective bandwidth (#1962)
93505bc WAR on index mapping when exact and permissive maps differ (#1960)
45e95fd Allow splitting inner-most ID to create virtual innermost ID in transpose scheduler (#1930)
a3ecb33 Improve the comments at the beginning of index_compute.h (#1946)
f7bc341 Remove unused variables (#1955)
df3393a Some cleanup (#1957)
7d1d7c8 TVDomainGuard factory (#1953)
357ba22 Fill allocation with nan on tests (#1956)
8eafc54 Fix detection of unmappable root domains (#1952)
90a51f2 Some indexing cleanups, Add eye support (#1940)
ddc01e4 Exclude unsupported data types (#1951)
992e17c test the groups the same order as they are merged (#1949)
208262b Move detection of self mapping IDs to IterDomainGraph from (#1941)
ac4de38 Merge pull request #1945 from csarofeen/master_merge_0828
6310948 Add full, full_like, zeros, zeros_like, ones, ones_like (#1943)
aab10bc Merge remote-tracking branch 'upstream/viable/strict' into HEAD
4c254c0 Fix arange when step is negative (#1942)
89330aa Tensor factories must set the output shape as its input (#1939)
```

[ghstack-poisoned]
jjsjann123 added a commit that referenced this issue Oct 27, 2022

ghstack-source-id: f05ebc3
Pull Request resolved: #87779
jjsjann123 added a commit that referenced this issue Oct 27, 2022
Syncing nvfuser devel branch to upstream master. https://github.com/csarofeen/pytorch/

[ghstack-poisoned]
jjsjann123 added a commit that referenced this issue Oct 31, 2022
Syncing nvfuser devel branch to upstream master. https://github.com/csarofeen/pytorch/

Codegen changes include:

* codegen improvements:
    i. allow non-root trivial reductions and empty/no-op fusions
    ii. fix vectorization checks and size calculation
    iii. improve bank conflict handling
    iv. enable the transpose scheduler

* misc:
    i. fixes for CI test failures
    ii. cpp test file cleanup
    iii. trivial forwarding support added in the codegen runtime
    iv. factory method support added in codegen



RUN_TORCHBENCH: nvfuser

Differential Revision: [D40869846](https://our.internmc.facebook.com/intern/diff/D40869846)

[ghstack-poisoned]
jjsjann123 added a commit that referenced this issue Oct 31, 2022

RUN_TORCHBENCH: nvfuser

ghstack-source-id: 3b21376
Pull Request resolved: #87779
jjsjann123 added a commit that referenced this issue Oct 31, 2022
Syncing nvfuser devel branch to upstream master. https://github.com/csarofeen/pytorch/

RUN_TORCHBENCH: nvfuser

Differential Revision: [D40869846](https://our.internmc.facebook.com/intern/diff/D40869846)

[ghstack-poisoned]
jjsjann123 added a commit that referenced this issue Nov 2, 2022
Syncing nvfuser devel branch to upstream master. https://github.com/csarofeen/pytorch/

RUN_TORCHBENCH: nvfuser

Differential Revision: [D40869846](https://our.internmc.facebook.com/intern/diff/D40869846)

[ghstack-poisoned]
jjsjann123 added a commit that referenced this issue Nov 2, 2022

RUN_TORCHBENCH: nvfuser

ghstack-source-id: 424af8e
Pull Request resolved: #87779
jjsjann123 added a commit that referenced this issue Nov 2, 2022
Syncing nvfuser devel branch to upstream master. https://github.com/csarofeen/pytorch/

RUN_TORCHBENCH: nvfuser

Differential Revision: [D40869846](https://our.internmc.facebook.com/intern/diff/D40869846)

[ghstack-poisoned]
pytorchmergebot pushed a commit that referenced this issue Nov 4, 2022
Syncing nvfuser devel branch to upstream master. https://github.com/csarofeen/pytorch/

Codegen changes include:

* codegen improvement:
    i. allow non-root trivial reductions, allow empty/no-op fusion
    ii. fixes vectorization checks and size calculation
    iii. bank conflict handle improvement
    iv. enables transpose scheduler

* misc:
    i. CI tests failure fixes
    ii. cpp tests file clean up
    iii. trivial forwarding supports added in codegen runtime
    iv. added factory methods support in codegen

Commits that's in this PR from the devel branch:

```
7117a7e patching nvfuser conv cudnn test numerics mismatch (#2048)
65af1a4 Inserting sync for redundant parallel types is already done at the (#2023)
6ac74d1 Fix sync map (#2047)
f5bca33 Bank conflict checker improvements (#2032)
d2ca7e3 Minor update on cp.async code generation. (#1901)
d36cf61 Test file cleanup (#2040)
0b8e83f Allow non-root trivial reductions (#2037)
a2dfe40 Fix vectorize size calculation (#2035)
e040676 Use withPredicate to replace setPredicate to maintain Exprs immutable (#2025)
197221b removing ci workflow (#2034)
40e2703 Reduction rand like patch (#2031)
bc77266 Add utility for checking bank conflict of shared memory (#2029)
ddd1cf7 Add back FusionReductionWithTrivialReduction_CUDA (#2030)
fbd97e5 Revert "Cleanup trivial reduction workarounds (#2006)" (#2024)
bca20c1 Cleanup trivial reduction workarounds (#2006)
e4b6585 Trivial forwarding (#1995)
1a0e355 Fix contiguity analysis of predicates to match updated contiguity. (#1991)
a4effa6 Enable output allocation cache (#2010)
35440b7 Patching bn inference (#2016)
0f9f0b4 Add matmul benchmark (#2007)
45045cd Enable tests previously disabled due to an aliasing bug (#2005)
967aa77 Contiguous indexing for View operations (#1990)
a43cb20 Make inlining even more modular (#2004)
dc45835 Test util cleanup (#2003)
3ca21eb More strict validation (#2000)
a7a7d57 Fix build problem (#1999)
fc235b0 Just fixes comments (#1998)
482386c cleanup (#1997)
4cbe0db Improve divisible split detection (#1970)
42ccc52 Minor build fix. (#1996)
fcf8c09 Cleanup of lower_utils.cpp: Isolate out GpuLower usage (#1989)
15f2f6d Move ConcretizedBroadcastDomains to shared_ptr in GpuLower. (#1988)
8f1c7f5 Minor cleanup lower_unroll.cpp (#1994)
1d9858c Minor cleanup (#1992)
f262d9c Add support for uniform RNG (#1986)
eb1dad1 Remove non-const functions, remove GpuLower instance on build, pass in ca_map. (#1987)
634820c Add support for some empty fusion (#1981)
eabe8d8 Segment self mapping fusions (#1954)
e96aacf Enable Transpose operation (#1882)
425dce2 Add a null scheduler that helps segmenting away no-op schedules (#1835)
306d4a6 Fix canScheduleCompileTime check of transpose scheduler (#1969)
b1bd32c Minor fix (#1967)
bd93578 Enable transpose scheduler (#1927)
b7a206e Move scheduler vectorize utilities into their own file (#1959)
d9420e4 View scheduling (#1928)
c668e13 Upstream push ci fixes (#1965)
c40202b Fix dump effective bandwidth (#1962)
93505bc WAR on index mapping when exact and permissive maps differ (#1960)
45e95fd Allow splitting inner-most ID to create virtual innermost ID in transpose scheduler (#1930)
a3ecb33 Improve the comments at the beginning of index_compute.h (#1946)
f7bc341 Remove unused variables (#1955)
df3393a Some cleanup (#1957)
7d1d7c8 TVDomainGuard factory (#1953)
357ba22 Fill allocation with nan on tests (#1956)
8eafc54 Fix detection of unmappable root domains (#1952)
90a51f2 Some indexing cleanups, Add eye support (#1940)
ddc01e4 Exclude unsupported data types (#1951)
992e17c test the groups the same order as they are merged (#1949)
208262b Move detection of self mapping IDs to IterDomainGraph from (#1941)
ac4de38 Merge pull request #1945 from csarofeen/master_merge_0828
6310948 Add full, full_like, zeros, zeros_like, ones, ones_like (#1943)
aab10bc Merge remote-tracking branch 'upstream/viable/strict' into HEAD
4c254c0 Fix arange when step is negative (#1942)
89330aa Tensor factories must set the output shape as its input (#1939)
```

RUN_TORCHBENCH: nvfuser

Differential Revision: [D40869846](https://our.internmc.facebook.com/intern/diff/D40869846)
Pull Request resolved: #87779
Approved by: https://github.com/davidberard98
kulinseth pushed a commit to kulinseth/pytorch that referenced this issue Nov 5, 2022
kulinseth pushed a commit to kulinseth/pytorch that referenced this issue Dec 10, 2022