Implement Fast Fourier Transform ops #386
Comments
We'll need to add a few FFT ops to get this done -- probably using cufft or fbfft for GPU. Assigning to @zffchen78 since he has experience in ffts and adding ops :) |
Thank you @vrv and @zffchen78. Basically, these new types of RNNs are very promising and I hope to integrate them into TensorFlow over the next week if possible. I'm starting to work on it here: However, it will not be possible to do without FFT and IFFT support, so I really appreciate your help! |
So would this imply that the FFT ops only work for GPU and not CPU? Or just very very slow on CPU? |
Well, the problem is that the actual unitary RNN uses FFT and IFFT within its cell. Ideally you want the cell's computation done in parallel on the GPU. If it's on the CPU, it's going to take forever. Here's the paper: http://arxiv.org/pdf/1511.06464v1.pdf Look at their main equation (it's number 6) |
With 'taking forever', do you mean compared to the uRNN GPU version, or also compared to a 'classical' RNN or an LSTM with the same number of parameters? I would at least hope the speed to convergence demonstrated in the paper would outweigh any extra computation time per iteration. :) Anyway, I would expect all ops defined in Tensorflow to be available for both CPU and GPU (and any future device). I, and I guess others, don't have day-to-day access to a suitable GPU. Or is that not one of the design goals? |
We'll try to provide an implementation for both. |
Perfect! Thanks! @adder, I primarily use GPUs, and if you're going to use unitary RNNs for some serious language modeling, you gotta use GPUs (10x faster training time). I know they may converge more quickly (one of the benefits), but keep in mind that an epoch usually takes at least 24 hrs with normal LSTMs and GRUs for big datasets. So we simply can't afford to put them on CPUs. |
@LeavesBreathe, can you elaborate a bit more on your requirements about FFT/IFFT?
|
I guess you should ask this question to @LeavesBreathe :) |
@LeavesBreathe, is this the paper you are trying to reproduce? http://arxiv.org/pdf/1511.06464v1.pdf And formula (6) is the core computation you need to do. AFAIK, you need 2D, non-batched, complex-to-complex fft/ifft. Is that correct? |
Thanks @adder! @zffchen78, thanks for your help in this matter. I will do my best to answer your questions. The ultimate goal is to replicate this paper: http://arxiv.org/pdf/1511.06464v1.pdf To do so, I was planning on making a new class in RNN_cell.py. Therefore, I believe we need complex-to-complex along with 2d support. The reason why I wanted a pythonic tensorflow op is so that we can assign multiple GPUs to multiple unitary RNNs when the whole integration is complete. So one uRNN would be on gpu 1, and another uRNN would be on gpu 2. I don't know how hard it is for you to implement these FFTs and IFFTs, but I think 3d support would be nice for future users who may try unitary conv nets. I certainly don't want to ask too much of you though! If this is of any help to you, the goal is to replicate "Complex_RNN" here: https://github.com/amarshah/complex_RNN/blob/master/models.py#L532 |
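For readers without the paper at hand, equation (6) there (paraphrased here, so treat the exact factor ordering as an approximation) parameterizes the recurrent matrix as a product of structured unitary factors:

$$ W = D_3\, R_2\, \mathcal{F}^{-1} D_2\, \Pi\, R_1\, \mathcal{F}\, D_1, $$

where each $D_i$ is diagonal with unit-modulus entries, each $R_i$ is a Householder reflection, $\Pi$ is a fixed permutation, and $\mathcal{F}$, $\mathcal{F}^{-1}$ are the DFT and inverse DFT. Those two DFT factors are exactly where FFT/IFFT ops are needed: applying $W$ to a hidden-state vector then costs $O(n \log n)$ rather than $O(n^2)$. |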
I would also very much like to see fft support, not for the training of dnn models, but for the feature extraction in a speech recognition pipeline. In this case only r2c, 1d would be required, preferably with batching, on both cpu and gpu. |
Hey @zffchen78, I just wanted to follow up and ask if this has been implemented yet? I can't make any progress on the unitary RNNs without it. Don't mean to bring on any pressure! Just wanted to ask. |
I implemented 2D fft in the TF code base already. Hopefully, it'll be copied into OSS soon. |
Okay, thanks for the heads-up. Looking forward to it! |
@LeavesBreathe Why close this, it isn't there yet! |
@LeavesBreathe, we pushed changes today which enable fft2d and ifft2d on gpu. You can take a look at python/kernel_tests/fft_ops_test.py to see if it works for you. We are still figuring out license issues w/ fftw, which cpu support for fft2d/ifft2d needs. Hopefully, gpu support is sufficient for you for now. Let me know if you see any problems. |
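As a minimal sketch of what a round-trip check looks like (assuming the graph-mode API of that era, where tf.fft2d/tf.ifft2d take a rank-2 complex64 tensor):

```python
import numpy as np
import tensorflow as tf

# Build a small complex-valued input and verify ifft2d(fft2d(x)) ~= x.
x_np = (np.random.randn(8, 8) + 1j * np.random.randn(8, 8)).astype(np.complex64)
x = tf.constant(x_np)

spectrum = tf.fft2d(x)           # forward 2-D complex-to-complex FFT
recovered = tf.ifft2d(spectrum)  # inverse transform, includes the 1/N scaling

with tf.Session() as sess:
    np.testing.assert_allclose(sess.run(recovered), x_np, atol=1e-4)
```
|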
Awesome, gpu support is really all I needed, so let me test it out and get back to you. I am going to be pretty busy over the holidays, but come January 4 I should be back to testing this! |
@LeavesBreathe, I took a brief look at the paper you plan to replicate in TF. I suspect you'll encounter some nuances. E.g., some operations may not have support for complex64 for the time being. They are minor problems because they can be easily patched by updating a few .cc files to expand the types they support. |
Okay, this is very good to know @zffchen78. I will be sure to look into these formats. |
I think @LeavesBreathe's answer to @zffchen78's question was wrong. What uRNN needs is batched 1d fft. Both the paper and their theano implementation indicate this. Paper-wise, the matrix in Equation (6) is certainly multiplied with a complex vector for each data instance. Implementation-wise, check the following call stack,
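For what it's worth, here is a sketch of what batched 1-D FFT looks like with tf.fft as it later stabilized (the op transforms the innermost dimension and implicitly batches over all leading dimensions; whether the earliest builds exposed this as a separate batched variant is an assumption on my part):

```python
import numpy as np
import tensorflow as tf

# A [batch, time] complex tensor: one 1-D FFT per row, in a single op.
batch_np = (np.random.randn(32, 128) + 0j).astype(np.complex64)
batch = tf.constant(batch_np)

batched_fft = tf.fft(batch)  # shape [32, 128]; FFT along the last axis

with tf.Session() as sess:
    np.testing.assert_allclose(
        sess.run(batched_fft), np.fft.fft(batch_np, axis=-1), atol=1e-2)
```
|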
To TF core developer: why not simply implement an interface like scikits.cuda: https://github.com/lebedov/scikit-cuda/blob/master/skcuda/fft.py |
We welcome the community to contribute the code. If one wants to make the fft op support more complete in tensorflow, feel free to do so. |
The current plan of record is that @tillahoffmann is working on a CPU-based FFT/RFFT using Eigen's TensorFFT module. No firm ETA but hopefully soon -- thanks @tillahoffmann! |
@tillahoffmann, do you have any updates on the progress of the CPU implementation of FFT? Many thanks for this. |
@pawarrick, complex FFT and IFFT as well as the RFFT are now in master. The IRFFT still needs a bit of work. |
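A quick sketch of the real-to-complex transform that just landed (using the names as they later stabilized under tf.spectral; treat the exact module path as an assumption):

```python
import numpy as np
import tensorflow as tf

# For a length-N real signal, the RFFT returns only the N // 2 + 1
# non-redundant complex bins (the rest follow by conjugate symmetry).
signal = tf.constant(np.random.randn(128).astype(np.float32))
spectrum = tf.spectral.rfft(signal)  # shape [65], dtype complex64

with tf.Session() as sess:
    print(sess.run(tf.shape(spectrum)))  # [65]
```
|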
Yes, thanks very much for your contribution. I saw the discussion about it in this thread: #9029 |
@tillahoffmann Thanks very much for your contribution! I've added the IRFFT part, see #10127 |
Thanks to @tillahoffmann (#9029) and @Czxck001 (#10127), I think we can call this done! |
Great work! Now I can finally hack on any fft operation on my laptop. |
@beniroquai |
Hi, I just tried running FFT ops on the CPU with the 1.2.1 release, but still get errors like this: Am I missing something? |
@uschmidt83, have you tried the nightly build? |
Is it also scheduled to be put in a release version at some point?
|
It should be in TF 1.3, which is in the process of being released! Also note tf.contrib.signal in this release, which has some signal processing basics that are useful for e.g. computing spectrograms.
|
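To illustrate the tf.contrib.signal mention, a sketch of a magnitude spectrogram (TF 1.3-era names; these ops later moved to tf.signal):

```python
import tensorflow as tf

# [batch, samples] float32 audio at, say, 16 kHz.
signals = tf.placeholder(tf.float32, [None, 16000])

# Short-time Fourier transform: 25 ms frames, 10 ms hop, 512-point FFT.
stfts = tf.contrib.signal.stft(signals,
                               frame_length=400,
                               frame_step=160,
                               fft_length=512)

# Magnitude spectrogram: [batch, frames, 512 // 2 + 1] = [batch, frames, 257].
spectrograms = tf.abs(stfts)
```
|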
Thanks, everyone! It does work for me with the 1.3.0-rc0 release. |
there isn't an alias for the real FFTs in TF 1.3.0

present functions:
tf.spectral.fft
tf.spectral.fft2d
tf.spectral.fft3d
tf.spectral.ifft
tf.spectral.ifft2d
tf.spectral.ifft3d
tf.spectral.irfft
tf.spectral.irfft2d
tf.spectral.irfft3d
tf.spectral.rfft
tf.spectral.rfft2d
tf.spectral.rfft3d

present aliases:
tf.fft
tf.fft2d
tf.fft3d
tf.ifft
tf.ifft2d
tf.ifft3d

can someone please create the missing aliases for the next TF release? |
Hi Federico,
This is intentional, as we are trying to avoid polluting the top-level namespace in favor of more specific submodules. tf.fft and other fft symbols in the tf module are deprecated now :).
|
Hi everyone, I would like to do some work on the basis of this article, but the original author uses the theano.tensor.fft module to implement batch 1-D FFT, batch 2-D FFT, batch 1-D IFFT, and batch 2-D IFFT. Now I want to replace the theano.tensor.fft module directly with tensorflow's tf.fft and tf.fft2d. Is that OK? https://github.com/ChihebTrabelsi/deep_complex_networks/blob/master/complexnn/fft.py I look forward to your help, thank you very much! |
@WangNuoWa -- yes, tf.fft/tf.ifft and tf.fft2d/tf.ifft2d should be drop-in replacements for those (though I have never used theano.tensor.fft), perhaps with slightly different normalization strategies (I see an "ortho" norm in there). |
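On the normalization point: tf.fft applies no scaling on the forward pass and tf.ifft applies 1/N on the inverse, whereas an "ortho"-normalized pair scales both directions by 1/sqrt(N). A thin wrapper, as a sketch, bridges the two conventions:

```python
import tensorflow as tf

def ortho_fft(x):
    # Forward FFT with orthonormal scaling: divide by sqrt(N).
    n = tf.cast(tf.shape(x)[-1], tf.complex64)
    return tf.fft(x) / tf.sqrt(n)

def ortho_ifft(x):
    # tf.ifft already divides by N, so multiply by sqrt(N)
    # to end up with the 1/sqrt(N) orthonormal factor.
    n = tf.cast(tf.shape(x)[-1], tf.complex64)
    return tf.ifft(x) * tf.sqrt(n)
```
|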
Is tf.fft differentiable? |
Yes!
|
@rryan, is there anything special we would have to do? I get new warnings now, like these: (you can copy-paste the whole thing)
If it's relevant:
|
I think what you're seeing is independent of whether FFT is differentiable. Here's a colab where I copied your example. You need to do as the error message says and provide a grad_ys value. You can see this logic here: tensorflow/python/ops/gradients_impl.py, lines 232 to 239 (at 70e33e0)
|
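To make the workaround concrete, a sketch (TF 1.x graph mode) of seeding tf.gradients with an explicit complex grad_ys, since the default all-ones seed is only created for real dtypes:

```python
import numpy as np
import tensorflow as tf

x = tf.Variable(np.random.randn(8).astype(np.complex64))
y = tf.fft(x)

# Without grad_ys, gradients of a complex output raise a TypeError;
# supply an explicit complex-valued seed instead.
grads = tf.gradients(y, x, grad_ys=tf.ones_like(y))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(grads[0]))
```
|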
Hi, I was wondering whether you guys at TF were planning to implement a non-uniform FFT layer. Cheers |
@zaccharieramzi nobody on the TF team is currently considering that, as far as I know. AIUI, the most efficient algorithms for it are defined in terms of the FFT -- is it an easy extension of the FFT ops that already exist? If so, I would be happy to code review an addition to tf.signal if someone submits a pull request. If it is more involved to implement, for example requiring custom kernels, then I think it might be a harder sell to include, since there has not been high demand for it, and adding custom kernels increases the binary size and compile time of TensorFlow. Could you please file a GitHub issue requesting the feature? Anyone else who is interested would then have a nice place to add a +1 so we can better understand how many people would benefit from it being in TensorFlow itself (as opposed to being implemented as a standalone library). |
Hi @rryan, Unfortunately it's not a direct extension of the FFT afaik (I am still new to the topic), so I think it does require a custom kernel. I totally understand that it could be a hard sell, and I will file a GitHub issue requesting the feature (I am totally not familiar with C/C++ so I don't think I could implement it myself). In the meantime I will use ODL to achieve my goals, although it is slower because the implementation is in python. EDIT: done in #27193. |
Hello, I'm getting this not implemented error...
Is there any way I could get this working myself? I've taken a look at where the ops for 1D/2D (I)RFFT are defined in fft_ops.py but I'm a bit lost as to what I'd need to do to implement it in 3D. Thanks in advance for any advice! |
I don't have tensorflow installed at the moment to verify, but the three-dimensional RFFT is just a two-dimensional RFFT plus an FFT along the leading dimension. Here's a numpy example.

```python
import numpy as np

x = np.random.normal(0, 1, (15, 16, 17))
# Compute three-dimensional FFT.
fftn = np.fft.rfftn(x, x.shape)
# Do it in two steps.
fftn_two_step = np.fft.fft(np.fft.rfft2(x), axis=0)
np.testing.assert_allclose(fftn, fftn_two_step)
```

This should allow you to get the gradients by decomposing the three-dimensional transform into a one-dimensional and a two-dimensional transform (assuming those two have gradients implemented). |
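Translating that decomposition to TF, as a sketch (assuming the 1.x-era tf.spectral names and that rfft2d/fft both have gradients registered): since tf.fft runs over the innermost axis, the leading axis has to be rotated into place and back.

```python
import numpy as np
import tensorflow as tf

x = tf.constant(np.random.normal(0, 1, (15, 16, 17)), dtype=tf.float32)

# Step 1: real-to-complex FFT over the two innermost axes -> [15, 16, 9].
step1 = tf.spectral.rfft2d(x)

# Step 2: complex FFT over the leading axis. tf.fft transforms the
# innermost axis, so transpose it there and back -> [15, 16, 9] overall.
rfft3d_manual = tf.transpose(
    tf.fft(tf.transpose(step1, [1, 2, 0])), [2, 0, 1])
```
|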
Hey TF,
Trying to integrate the new Unitary RNNs into tensorflow. Unfortunately, they require Fast Fourier Transform and Inverse Fast Fourier Transform ops.
For efficiency, this needs to be done on the GPU. In the author's code (theano), they do this by making a Theano op and inserting scikit's fast Fourier and inverse fast Fourier transforms here:
https://github.com/amarshah/complex_RNN/blob/master/fftconv.py
How can this be done in TF? Upon completing the Unitary RNN, I plan to share it with everyone! I asked on the google groups, but didn't get a reply.