Feature request: TensorPadOp #5471
Comments
See related issue: #5500. Do you think it would solve your problem well on the performance side? I think that for 1d, with subtensor and concatenate, we should be performant enough, but for 2d probably not. Did you do timings with the current ops we have? Maybe it is fast enough even if not super efficient, as the bottleneck should still be the convolution itself, so maybe we don't care much about the efficiency of this implementation.
@nouiz The whole starting point of introducing "elemwise" padding is that I'd like to define some custom filters like in #5618. I'm already using subtensor + concat + img2neib in my current implementation, and for small models this is sufficient. I suppose there should be a substantial speed-up if everything is done in a single kernel, but I really didn't test this, as I don't have a working implementation yet. Merging multiple split / concat gives a speed-up for the padding part but not for the custom filtering part. I might work on this if I've got the time; I'll let you know when a PoC is ready and put up profiling info.
Just found out this is mostly a duplicate of #1216; closing.
numpy 1.7+ has np.pad.
Use case
special convolution kernel
Assume we convolve an image with the kernel [[-1., 0., 1.]]; instead of using conv2d, one could do it with a pad followed by elementwise operations, as in the sketch below. This would be useful for simulating PDEs.
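For illustration, a NumPy sketch of the idea (the image x, its shape, and the variable names are assumptions, not from the original snippet):

```python
import numpy as np

x = np.random.rand(64, 64)                      # some image

# Zero-pad the columns by one on each side, then take the difference of the
# two shifted views; this is the elementwise form of filtering with
# [[-1., 0., 1.]] (up to the kernel-flip convention of conv2d).
xp = np.pad(x, [(0, 0), (1, 1)], mode='constant')
y = xp[:, 2:] - xp[:, :-2]                      # y[i, j] = x[i, j+1] - x[i, j-1]

assert y.shape == x.shape
```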
This can also be done with T.join(zeros(...), ...), but I'm not sure about the performance. np.pad does not support negative padding, but that does not mean Theano shouldn't support it; more on this below.
cyclic convolution
cuDNN does not have an API for this, so it can be done as conv2d(pad(..., mode='wrap')); see the sketch below. This could be useful for certain PDE boundary conditions or texture synthesis.
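A 1-D NumPy sketch of what mode='wrap' buys (the signal and stencil here are assumptions; the check against np.roll is only there to show the cyclic boundary handling):

```python
import numpy as np

x = np.arange(8, dtype=np.float64)        # a small 1-D signal
xp = np.pad(x, 1, mode='wrap')            # [x[-1], x[0], ..., x[-1], x[0]]

# "Valid" application of the [-1, 0, 1] stencil on the wrap-padded signal
# gives the cyclic (circular) result:
y_wrap = xp[2:] - xp[:-2]

# The same cyclic result via np.roll, the way roll-based code would do it:
y_roll = np.roll(x, -1) - np.roll(x, 1)

assert np.allclose(y_wrap, y_roll)
```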
This would also replace the current theano.roll, which uses subtensor + join; see the sketch below.
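A sketch of roll expressed through a wrap-mode pad plus a single slice (roll_via_pad is a made-up name, and it is 1-D only for brevity):

```python
import numpy as np

def roll_via_pad(x, shift):
    """Circular shift of a 1-D array via wrap padding and one slice,
    rather than the subtensor + join used by theano.roll today."""
    n = x.shape[0]
    shift %= n
    # Prepend the last `shift` elements (wrap padding), keep the first n.
    return np.pad(x, (shift, 0), mode='wrap')[:n]

x = np.arange(5)
assert np.array_equal(roll_via_pad(x, 2), np.roll(x, 2))
assert np.array_equal(roll_via_pad(x, -1), np.roll(x, -1))
```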
Implementation ideas
In most cases, pad behaves as an injection, and sometimes as a bijection. I think it's good to generate code that can be fused with elemwise Ops; the above convolution example should then compile into a single kernel, roughly as sketched below.
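Roughly, such a fused kernel would amount to the following per-output-element computation, written here as a plain Python loop rather than generated C/CUDA (shapes and names are assumptions): no padded intermediate is materialized, and the boundary is handled inline.

```python
import numpy as np

x = np.random.rand(4, 6)
y = np.empty_like(x)
h, w = x.shape

# One pass over the output; the zero padding shows up only as an inline
# boundary check, which is what a pad-aware elemwise kernel could emit.
for i in range(h):
    for j in range(w):
        left  = x[i, j - 1] if j - 1 >= 0 else 0.0
        right = x[i, j + 1] if j + 1 < w else 0.0
        y[i, j] = right - left            # the [[-1., 0., 1.]] stencil
```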
With this, we can construct cross-shaped convolutions, bilateral filters, high-dimensional convolutions... any kind of filter, without worrying about performance (an example follows). This makes the code simpler.
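For instance, a cross-shaped (5-point) stencil falls out of pad plus elementwise arithmetic alone; a NumPy sketch (the image and the names are assumptions):

```python
import numpy as np

img = np.random.rand(32, 32)
p = np.pad(img, 1, mode='constant')       # zero-pad by one on every side

# 5-point "cross" Laplacian from four shifted views plus the centre:
lap = (p[:-2, 1:-1] + p[2:, 1:-1] +       # up + down neighbours
       p[1:-1, :-2] + p[1:-1, 2:] -       # left + right neighbours
       4.0 * img)                         # centre

assert lap.shape == img.shape
```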
Without negative padding:
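A NumPy stand-in for the positive-padding-only version of the example above (variable names assumed): every shifted term costs a pad followed by a subtensor/slice:

```python
import numpy as np

x = np.random.rand(5, 7)

# Pad on one side, then slice the other side off again:
shift_left  = np.pad(x, [(0, 0), (0, 1)], mode='constant')[:, 1:]   # x[:, j+1]
shift_right = np.pad(x, [(0, 0), (1, 0)], mode='constant')[:, :-1]  # x[:, j-1]
y = shift_left - shift_right

assert y.shape == x.shape
```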
With negative padding:
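Since np.pad rejects negative widths, here is a small hypothetical helper (pad_ is my own name, not an existing API) in which a negative width means cropping; each shifted term then becomes a single pad call and the output keeps the shape of x:

```python
import numpy as np

def pad_(x, widths, mode='constant'):
    """Hypothetical pad where positive widths pad (as np.pad does)
    and negative widths crop from that side of the axis."""
    pos = [(max(b, 0), max(a, 0)) for b, a in widths]
    x = np.pad(x, pos, mode=mode)
    crop = tuple(slice(-min(b, 0), x.shape[i] + min(a, 0))
                 for i, (b, a) in enumerate(widths))
    return x[crop]

x = np.random.rand(5, 7)

# One pad call per shifted term, no extra subtensor step:
y = pad_(x, [(0, 0), (-1, 1)]) - pad_(x, [(0, 0), (1, -1)])
assert y.shape == x.shape
```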
Note that the shapes of x and y are the same, which opens the possibility of an in-place optimization. Negative padding is supported by Mathematica's ArrayPad, and I find it very convenient.