[Feature Request] Layer Normalization #1959
Comments
I use this:

```python
import torch
import torch.nn as nn

class LayerNorm(nn.Module):
    def __init__(self, features, eps=1e-6):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(features))
        self.beta = nn.Parameter(torch.zeros(features))
        self.eps = eps

    def forward(self, x):
        # Normalize over the last dimension, with separate moments per position
        mean = x.mean(-1, keepdim=True)
        std = x.std(-1, keepdim=True)
        return self.gamma * (x - mean) / (std + self.eps) + self.beta
```
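A quick usage sketch of the module above (shapes are illustrative; it normalizes over the last dimension):

```python
x = torch.randn(4, 10, 8)   # (batch, time, features)
ln = LayerNorm(8)
y = ln(x)                   # separate mean/std for every (batch, time) position
print(y.shape)              # torch.Size([4, 10, 8])
```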
Nice @jekbradbury - though I presume 2D versions etc. would have to be adapted to work on a per-channel basis, like batch norm? Either way, it's still probably worth having this in the library with a test.
Yeah, as usual I'm more or less forgetting that images exist. This should at least generalize to a couple different situations in NLP though, or really anywhere that channels are the last dimension and you want separate moments for every {timestep, y-position, etc.}.
This is the layer normalization implementation in TensorFlow. How can I port it to PyTorch? For example, how do I translate `tf.nn.moments` and the other ops?

```python
def Layernorm(name, norm_axes, inputs):
    mean, var = tf.nn.moments(inputs, norm_axes, keep_dims=True)

    # Assume the 'neurons' axis is the first of norm_axes. This is the case for fully-connected and BCHW conv layers.
    n_neurons = inputs.get_shape().as_list()[norm_axes[0]]

    offset = lib.param(name+'.offset', np.zeros(n_neurons, dtype='float32'))
    scale = lib.param(name+'.scale', np.ones(n_neurons, dtype='float32'))

    # Add broadcasting dims to offset and scale (e.g. BCHW conv data)
    offset = tf.reshape(offset, [-1] + [1 for i in xrange(len(norm_axes)-1)])
    scale = tf.reshape(scale, [-1] + [1 for i in xrange(len(norm_axes)-1)])

    result = tf.nn.batch_normalization(inputs, mean, var, offset, scale, 1e-5)
    return result
```
I have created a naive implementation of LayerNorm for GRU recently. It makes things quite a bit slower, mainly because everything is being done manually, but hopefully it can be helpful. Also note that I don't divide the hidden state by its std, because I usually initialize the hidden state with zeros.

```python
import torch
import torch.nn as nn

class LayerNormGRUCell(nn.GRUCell):
    def __init__(self, input_size, hidden_size, bias=True):
        super(LayerNormGRUCell, self).__init__(input_size, hidden_size, bias)
        self.gamma_ih = nn.Parameter(torch.ones(3 * self.hidden_size))
        self.gamma_hh = nn.Parameter(torch.ones(3 * self.hidden_size))
        self.eps = 0

    def _layer_norm_x(self, x, g, b):
        # keepdim=True plus broadcasting replaces the old mean(1).expand_as(x) idiom
        mean = x.mean(1, keepdim=True)
        std = x.std(1, keepdim=True)
        return g * ((x - mean) / (std + self.eps)) + b

    def _layer_norm_h(self, x, g, b):
        # No division by std: the hidden state is initialized with zeros (see note above)
        mean = x.mean(1, keepdim=True)
        return g * (x - mean) + b

    def forward(self, x, h):
        ih_rz = self._layer_norm_x(
            torch.mm(x, self.weight_ih.narrow(0, 0, 2 * self.hidden_size).transpose(0, 1)),
            self.gamma_ih.narrow(0, 0, 2 * self.hidden_size),
            self.bias_ih.narrow(0, 0, 2 * self.hidden_size))

        hh_rz = self._layer_norm_h(
            torch.mm(h, self.weight_hh.narrow(0, 0, 2 * self.hidden_size).transpose(0, 1)),
            self.gamma_hh.narrow(0, 0, 2 * self.hidden_size),
            self.bias_hh.narrow(0, 0, 2 * self.hidden_size))

        rz = torch.sigmoid(ih_rz + hh_rz)
        r = rz.narrow(1, 0, self.hidden_size)
        z = rz.narrow(1, self.hidden_size, self.hidden_size)

        ih_n = self._layer_norm_x(
            torch.mm(x, self.weight_ih.narrow(0, 2 * self.hidden_size, self.hidden_size).transpose(0, 1)),
            self.gamma_ih.narrow(0, 2 * self.hidden_size, self.hidden_size),
            self.bias_ih.narrow(0, 2 * self.hidden_size, self.hidden_size))

        hh_n = self._layer_norm_h(
            torch.mm(h, self.weight_hh.narrow(0, 2 * self.hidden_size, self.hidden_size).transpose(0, 1)),
            self.gamma_hh.narrow(0, 2 * self.hidden_size, self.hidden_size),
            self.bias_hh.narrow(0, 2 * self.hidden_size, self.hidden_size))

        n = torch.tanh(ih_n + r * hh_n)
        h = (1 - z) * n + z * h
        return h

class LayerNormGRU(nn.Module):
    def __init__(self, input_size, hidden_size, bias=True):
        super(LayerNormGRU, self).__init__()
        self.cell = LayerNormGRUCell(input_size, hidden_size, bias)
        self.weight_ih_l0 = self.cell.weight_ih
        self.weight_hh_l0 = self.cell.weight_hh
        self.bias_ih_l0 = self.cell.bias_ih
        self.bias_hh_l0 = self.cell.bias_hh

    def forward(self, xs, h):
        # Step the cell manually over the time dimension (dim 0)
        h = h.squeeze(0)
        ys = []
        for i in range(xs.size(0)):
            x = xs.narrow(0, i, 1).squeeze(0)
            h = self.cell(x, h)
            ys.append(h.unsqueeze(0))
        y = torch.cat(ys, 0)
        h = h.unsqueeze(0)
        return y, h
```
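For reference, a usage sketch of the wrapper above (shapes follow the nn.GRU convention of (seq_len, batch, feature)):

```python
xs = torch.randn(7, 3, 16)    # (seq_len, batch, input_size)
h0 = torch.zeros(1, 3, 32)    # (num_layers * num_directions, batch, hidden_size)
gru = LayerNormGRU(16, 32)
ys, hn = gru(xs, h0)
print(ys.shape, hn.shape)     # torch.Size([7, 3, 32]) torch.Size([1, 3, 32])
```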
@jekbradbury Hi Jek, is there any more efficient implementation of LayerNorm? I use your code and it is very cool, but since there are more than 50 LN layers in my network, I feel it has become the speed bottleneck.
@jekbradbury, this is a LayerNorm for >=2D inputs, modified from your code:

```python
class LayerNorm(nn.Module):
    def __init__(self, num_features, eps=1e-5, affine=True):
        super(LayerNorm, self).__init__()
        self.num_features = num_features
        self.affine = affine
        self.eps = eps
        if self.affine:
            self.gamma = nn.Parameter(torch.Tensor(num_features).uniform_())
            self.beta = nn.Parameter(torch.zeros(num_features))

    def forward(self, x):
        # Per-sample statistics over all non-batch dimensions
        shape = [-1] + [1] * (x.dim() - 1)
        mean = x.view(x.size(0), -1).mean(1).view(*shape)
        std = x.view(x.size(0), -1).std(1).view(*shape)
        y = (x - mean) / (std + self.eps)
        if self.affine:
            # Per-channel affine, broadcast over the remaining dims
            shape = [1, -1] + [1] * (x.dim() - 2)
            y = self.gamma.view(*shape) * y + self.beta.view(*shape)
        return y
```
@D-X-Y @jekbradbury I can also confirm that a simple implementation of LayerNorm is quite slow in PyTorch (it slows down my training by at least 3x). Maybe the mean and standard deviation computation could be combined into a single CUDA kernel?
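One partial mitigation, in PyTorch versions that provide it, is `torch.var_mean`, which returns both statistics from a single reduction instead of two separate passes. A minimal sketch:

```python
import torch

x = torch.randn(32, 512)

# Two separate reductions:
mean = x.mean(-1, keepdim=True)
std = x.std(-1, keepdim=True)

# One fused reduction (choose unbiased to match whichever convention you need):
var, mean = torch.var_mean(x, dim=-1, unbiased=True, keepdim=True)
std = var.sqrt()
```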
Just a quick note: @LynnHo 's version calculates the mean and std over all channels combined, but the beta and gamma are per-channel. (This is probably intended, but I was confused for a moment.)
@t-vi I just followed what tf-slim does. However, this implementation is unbearably slow. Does anybody know a more efficient implementation?
There seems to be one for TF, maybe you can use that as inspiration: https://github.com/MycChiu/fast-LayerNorm-TF .
@LynnHo I am new to PyTorch and just going through the layer-normalization code. Will the mean and std return a single value even for images in a CNN, since x.size(0) represents the batch size, which is one in this case? Just a bit confused.
@Blade6570 according to the Layer Normalization paper, yes, the mean and standard deviation should be single numbers shared between all activations (see section 3 of the paper). This is true for all network types. EDIT: Although the mean and standard deviation are each single numbers for all activations, the trainable gain and bias terms should be of the same size as the vector being normalized, ignoring the batch size. The paper also notes that Batch Normalization works better than Layer Normalization for Convolutional Neural Networks (see section 6.7). Note that new research may have been released since this paper (July 21, 2016) that I am unaware of.
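To make the shapes concrete, a small sketch of what that means for a BCHW batch (per-sample scalar statistics, per-feature gain and bias; the numbers are illustrative):

```python
import torch

x = torch.randn(8, 3, 32, 32)            # (batch, C, H, W)
mean = x.view(8, -1).mean(1)             # one number per sample
std = x.view(8, -1).std(1)               # one number per sample
gain = torch.ones(3, 32, 32)             # trainable, same size as the normalized vector
bias = torch.zeros(3, 32, 32)
y = gain * (x - mean.view(-1, 1, 1, 1)) / (std.view(-1, 1, 1, 1) + 1e-5) + bias
```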
Are the implementations of Layer Normalization and Instance Normalization identical?
@wandering007 They are very similar, but have some subtle differences. IN is applied on each channel of channeled data like images, while LN is usually applied on the entire sample, often in NLP tasks. Also, LN applies an elementwise affine transform, while IN usually doesn't apply any affine transform. PyTorch does support IN with a scalar affine transform applied to each channel, but that's mostly because IN is implemented with the BN code.
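A quick way to see the difference in reduction axes on BCHW data (affine handling differs as described above; this only shows the statistics):

```python
import torch

x = torch.randn(8, 3, 32, 32)                     # (B, C, H, W)

# Instance Norm: statistics per (sample, channel), reduced over H, W
in_mean = x.mean(dim=(2, 3), keepdim=True)        # shape (8, 3, 1, 1)

# Layer Norm (whole-sample variant): statistics per sample, reduced over C, H, W
ln_mean = x.mean(dim=(1, 2, 3), keepdim=True)     # shape (8, 1, 1, 1)
```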
@ssnl Thanks for your patient explanation. So if I want to use LN in the context of channeled data like images, should I use IN (with affine=True) instead? E.g., when the ops are convolutions instead of linear layers.
You can use both on images. They just work differently. Depending on your design and the purpose of the work, I can see either working (and with affine either on or off).
@ssnl your explanation here was super helpful to me. I would suggest adding a bit more documentation to the IN layer description to clarify this.
(#1987) 634820c Add support for some empty fusion (#1981) eabe8d8 Segment self mapping fusions (#1954) e96aacf Enable Transpose operation (#1882) 425dce2 Add a null scheduler that helps segmenting away no-op schedules (#1835) 306d4a6 Fix canScheduleCompileTime check of transpose scheduler (#1969) b1bd32c Minor fix (#1967) bd93578 Enable transpose scheduler (#1927) b7a206e Move scheduler vectorize utilities into their own file (#1959) d9420e4 View scheduling (#1928) c668e13 Upstream push ci fixes (#1965) c40202b Fix dump effective bandwidth (#1962) 93505bc WAR on index mapping when exact and permissive maps differ (#1960) 45e95fd Allow splitting inner-most ID to create virtual innermost ID in transpose scheduler (#1930) a3ecb33 Improve the comments at the beginning of index_compute.h (#1946) f7bc341 Remove unused variables (#1955) df3393a Some cleanup (#1957) 7d1d7c8 TVDomainGuard factory (#1953) 357ba22 Fill allocation with nan on tests (#1956) 8eafc54 Fix detection of unmappable root domains (#1952) 90a51f2 Some indexing cleanups, Add eye support (#1940) ddc01e4 Exclude unsupported data types (#1951) 992e17c test the groups the same order as they are merged (#1949) 208262b Move detection of self mapping IDs to IterDomainGraph from (#1941) ac4de38 Merge pull request #1945 from csarofeen/master_merge_0828 6310948 Add full, full_like, zeros, zeros_like, ones, ones_like (#1943) aab10bc Merge remote-tracking branch 'upstream/viable/strict' into HEAD 4c254c0 Fix arange when step is negative (#1942) 89330aa Tensor factories must set the output shape as its input (#1939) ``` RUN_TORCHBENCH: nvfuser Differential Revision: [D40869846](https://our.internmc.facebook.com/intern/diff/D40869846) Pull Request resolved: #87779 Approved by: https://github.com/davidberard98
Syncing nvfuser devel branch to upstream master. https://github.com/csarofeen/pytorch/ Codegen changes include: * codegen improvement: i. allow non-root trivial reductions, allow empty/no-op fusion ii. fixes vectorization checks and size calculation iii. bank conflict handle improvement iv. enables transpose scheduler * misc: i. CI tests failure fixes ii. cpp tests file clean up iii. trivial forwarding supports added in codegen runtime iv. added factory methods support in codegen Commits that's in this PR from the devel branch: ``` 7117a7e patching nvfuser conv cudnn test numerics mismatch (pytorch#2048) 65af1a4 Inserting sync for redundant parallel types is already done at the (pytorch#2023) 6ac74d1 Fix sync map (pytorch#2047) f5bca33 Bank conflict checker improvements (pytorch#2032) d2ca7e3 Minor update on cp.async code generation. (pytorch#1901) d36cf61 Test file cleanup (pytorch#2040) 0b8e83f Allow non-root trivial reductions (pytorch#2037) a2dfe40 Fix vectorize size calculation (pytorch#2035) e040676 Use withPredicate to replace setPredicate to maintain Exprs immutable (pytorch#2025) 197221b removing ci workflow (pytorch#2034) 40e2703 Reduction rand like patch (pytorch#2031) bc77266 Add utility for checking bank conflict of shared memory (pytorch#2029) ddd1cf7 Add back FusionReductionWithTrivialReduction_CUDA (pytorch#2030) fbd97e5 Revert "Cleanup trivial reduction workarounds (pytorch#2006)" (pytorch#2024) bca20c1 Cleanup trivial reduction workarounds (pytorch#2006) e4b6585 Trivial forwarding (pytorch#1995) 1a0e355 Fix contiguity analysis of predicates to match updated contiguity. (pytorch#1991) a4effa6 Enable output allocation cache (pytorch#2010) 35440b7 Patching bn inference (pytorch#2016) 0f9f0b4 Add matmul benchmark (pytorch#2007) 45045cd Enable tests previously disabled due to an aliasing bug (pytorch#2005) 967aa77 Contiguous indexing for View operations (pytorch#1990) a43cb20 Make inlining even more modular (pytorch#2004) dc45835 Test util cleanup (pytorch#2003) 3ca21eb More strict validation (pytorch#2000) a7a7d57 Fix build problem (pytorch#1999) fc235b0 Just fixes comments (pytorch#1998) 482386c cleanup (pytorch#1997) 4cbe0db Improve divisible split detection (pytorch#1970) 42ccc52 Minor build fix. (pytorch#1996) fcf8c09 Cleanup of lower_utils.cpp: Isolate out GpuLower usage (pytorch#1989) 15f2f6d Move ConcretizedBroadcastDomains to shared_ptr in GpuLower. (pytorch#1988) 8f1c7f5 Minor cleanup lower_unroll.cpp (pytorch#1994) 1d9858c Minor cleanup (pytorch#1992) f262d9c Add support for uniform RNG (pytorch#1986) eb1dad1 Remove non-const functions, remove GpuLower instance on build, pass in ca_map. 
(pytorch#1987) 634820c Add support for some empty fusion (pytorch#1981) eabe8d8 Segment self mapping fusions (pytorch#1954) e96aacf Enable Transpose operation (pytorch#1882) 425dce2 Add a null scheduler that helps segmenting away no-op schedules (pytorch#1835) 306d4a6 Fix canScheduleCompileTime check of transpose scheduler (pytorch#1969) b1bd32c Minor fix (pytorch#1967) bd93578 Enable transpose scheduler (pytorch#1927) b7a206e Move scheduler vectorize utilities into their own file (pytorch#1959) d9420e4 View scheduling (pytorch#1928) c668e13 Upstream push ci fixes (pytorch#1965) c40202b Fix dump effective bandwidth (pytorch#1962) 93505bc WAR on index mapping when exact and permissive maps differ (pytorch#1960) 45e95fd Allow splitting inner-most ID to create virtual innermost ID in transpose scheduler (pytorch#1930) a3ecb33 Improve the comments at the beginning of index_compute.h (pytorch#1946) f7bc341 Remove unused variables (pytorch#1955) df3393a Some cleanup (pytorch#1957) 7d1d7c8 TVDomainGuard factory (pytorch#1953) 357ba22 Fill allocation with nan on tests (pytorch#1956) 8eafc54 Fix detection of unmappable root domains (pytorch#1952) 90a51f2 Some indexing cleanups, Add eye support (pytorch#1940) ddc01e4 Exclude unsupported data types (pytorch#1951) 992e17c test the groups the same order as they are merged (pytorch#1949) 208262b Move detection of self mapping IDs to IterDomainGraph from (pytorch#1941) ac4de38 Merge pull request pytorch#1945 from csarofeen/master_merge_0828 6310948 Add full, full_like, zeros, zeros_like, ones, ones_like (pytorch#1943) aab10bc Merge remote-tracking branch 'upstream/viable/strict' into HEAD 4c254c0 Fix arange when step is negative (pytorch#1942) 89330aa Tensor factories must set the output shape as its input (pytorch#1939) ``` RUN_TORCHBENCH: nvfuser Differential Revision: [D40869846](https://our.internmc.facebook.com/intern/diff/D40869846) Pull Request resolved: pytorch#87779 Approved by: https://github.com/davidberard98
Syncing nvfuser devel branch to upstream master. https://github.com/csarofeen/pytorch/ Codegen changes include: * codegen improvement: i. allow non-root trivial reductions, allow empty/no-op fusion ii. fixes vectorization checks and size calculation iii. bank conflict handle improvement iv. enables transpose scheduler * misc: i. CI tests failure fixes ii. cpp tests file clean up iii. trivial forwarding supports added in codegen runtime iv. added factory methods support in codegen Commits that's in this PR from the devel branch: ``` 7117a7e patching nvfuser conv cudnn test numerics mismatch (pytorch#2048) 65af1a4 Inserting sync for redundant parallel types is already done at the (pytorch#2023) 6ac74d1 Fix sync map (pytorch#2047) f5bca33 Bank conflict checker improvements (pytorch#2032) d2ca7e3 Minor update on cp.async code generation. (pytorch#1901) d36cf61 Test file cleanup (pytorch#2040) 0b8e83f Allow non-root trivial reductions (pytorch#2037) a2dfe40 Fix vectorize size calculation (pytorch#2035) e040676 Use withPredicate to replace setPredicate to maintain Exprs immutable (pytorch#2025) 197221b removing ci workflow (pytorch#2034) 40e2703 Reduction rand like patch (pytorch#2031) bc77266 Add utility for checking bank conflict of shared memory (pytorch#2029) ddd1cf7 Add back FusionReductionWithTrivialReduction_CUDA (pytorch#2030) fbd97e5 Revert "Cleanup trivial reduction workarounds (pytorch#2006)" (pytorch#2024) bca20c1 Cleanup trivial reduction workarounds (pytorch#2006) e4b6585 Trivial forwarding (pytorch#1995) 1a0e355 Fix contiguity analysis of predicates to match updated contiguity. (pytorch#1991) a4effa6 Enable output allocation cache (pytorch#2010) 35440b7 Patching bn inference (pytorch#2016) 0f9f0b4 Add matmul benchmark (pytorch#2007) 45045cd Enable tests previously disabled due to an aliasing bug (pytorch#2005) 967aa77 Contiguous indexing for View operations (pytorch#1990) a43cb20 Make inlining even more modular (pytorch#2004) dc45835 Test util cleanup (pytorch#2003) 3ca21eb More strict validation (pytorch#2000) a7a7d57 Fix build problem (pytorch#1999) fc235b0 Just fixes comments (pytorch#1998) 482386c clean 8000 up (pytorch#1997) 4cbe0db Improve divisible split detection (pytorch#1970) 42ccc52 Minor build fix. (pytorch#1996) fcf8c09 Cleanup of lower_utils.cpp: Isolate out GpuLower usage (pytorch#1989) 15f2f6d Move ConcretizedBroadcastDomains to shared_ptr in GpuLower. (pytorch#1988) 8f1c7f5 Minor cleanup lower_unroll.cpp (pytorch#1994) 1d9858c Minor cleanup (pytorch#1992) f262d9c Add support for uniform RNG (pytorch#1986) eb1dad1 Remove non-const functions, remove GpuLower instance on build, pass in ca_map. 
(pytorch#1987) 634820c Add support for some empty fusion (pytorch#1981) eabe8d8 Segment self mapping fusions (pytorch#1954) e96aacf Enable Transpose operation (pytorch#1882) 425dce2 Add a null scheduler that helps segmenting away no-op schedules (pytorch#1835) 306d4a6 Fix canScheduleCompileTime check of transpose scheduler (pytorch#1969) b1bd32c Minor fix (pytorch#1967) bd93578 Enable transpose scheduler (pytorch#1927) b7a206e Move scheduler vectorize utilities into their own file (pytorch#1959) d9420e4 View scheduling (pytorch#1928) c668e13 Upstream push ci fixes (pytorch#1965) c40202b Fix dump effective bandwidth (pytorch#1962) 93505bc WAR on index mapping when exact and permissive maps differ (pytorch#1960) 45e95fd Allow splitting inner-most ID to create virtual innermost ID in transpose scheduler (pytorch#1930) a3ecb33 Improve the comments at the beginning of index_compute.h (pytorch#1946) f7bc341 Remove unused variables (pytorch#1955) df3393a Some cleanup (pytorch#1957) 7d1d7c8 TVDomainGuard factory (pytorch#1953) 357ba22 Fill allocation with nan on tests (pytorch#1956) 8eafc54 Fix detection of unmappable root domains (pytorch#1952) 90a51f2 Some indexing cleanups, Add eye support (pytorch#1940) ddc01e4 Exclude unsupported data types (pytorch#1951) 992e17c test the groups the same order as they are merged (pytorch#1949) 208262b Move detection of self mapping IDs to IterDomainGraph from (pytorch#1941) ac4de38 Merge pull request pytorch#1945 from csarofeen/master_merge_0828 6310948 Add full, full_like, zeros, zeros_like, ones, ones_like (pytorch#1943) aab10bc Merge remote-tracking branch 'upstream/viable/strict' into HEAD 4c254c0 Fix arange when step is negative (pytorch#1942) 89330aa Tensor factories must set the output shape as its input (pytorch#1939) ``` RUN_TORCHBENCH: nvfuser Differential Revision: [D40869846](https://our.internmc.facebook.com/intern/diff/D40869846) Pull Request resolved: pytorch#87779 Approved by: https://github.com/davidberard98
cherry-pick of ROCm@2b90647 (cherry picked from commit f678916)
See #1601 for previous discussion on layer normalization.
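For anyone landing on this thread later: a built-in `torch.nn.LayerNorm` module is available in PyTorch core. A minimal usage sketch, with shapes chosen purely for illustration:

```python
import torch
import torch.nn as nn

# Illustrative shapes: batch of 20 sequences, 5 timesteps, 10 features.
x = torch.randn(20, 5, 10)

# normalized_shape=10 normalizes over the last (feature) dimension,
# with learnable per-feature affine parameters by default.
layer_norm = nn.LayerNorm(10)

out = layer_norm(x)  # same shape as the input: (20, 5, 10)
```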