Reverse FFT of a ratio of two FFTs

SUMMARY

This discussion focuses on the challenges of performing deconvolution of two real signals using the Fast Fourier Transform (FFT) in R. The primary issue arises when the sampling interval includes negative time points, leading to unexpected results in the inverse FFT. The user is advised that the periodic nature of the FFT causes negative time points to appear at the end of the output array, necessitating a block-swapping operation to correct the final signal. Additionally, the discussion highlights the sensitivity of FFT-based deconvolution to noise in the input data.

PREREQUISITES
  • Understanding of Fast Fourier Transform (FFT) and its properties
  • Familiarity with R programming language and its FFT functions
  • Knowledge of probability density functions and their sampling
  • Concept of periodicity in discrete Fourier transforms
NEXT STEPS
  • Research "R FFT function usage and best practices"
  • Learn about "Deconvolution techniques in signal processing"
  • Explore "Numerical stability in FFT-based algorithms"
  • Investigate "Block-swapping operations in signal processing"
USEFUL FOR

Researchers, data scientists, and engineers working with signal processing, particularly those involved in deconvolution tasks using FFT in R.

eoghan
Dear all,
I don't know if this is the correct place for this question, but a quick search on the forum showed that most FFT-related questions have been posted here.
My question is this: I need to deconvolve two real signals (in my case they are two probability density functions), so I go into Fourier space, divide one FFT by the other, and then go back to real space.
Before applying the FFT I need to sample the signals, i.e. I have to define a sampling interval and frequency. If the sampling interval lies on the positive axis (the signal s is sampled at discrete time points s[t], t>=0), then everything is fine. But if the sampling interval includes negative points, then when I perform the inverse FFT on the ratio of the two FFTs, the final signal comes out swapped: the part corresponding to the negative sampling points is placed on the right, after the positive sampling points.
Can you please help me understand why this happens? Notice that if I perform iFFT(FFT(s)), where iFFT is the inverse FFT and s is my sampled signal, everything is fine even when the sampling interval comprises negative time points (i.e. s[t], t<0).

Here you can see an example of the code (R language):

Code:
# Define the sampling interval
fs <- 0.05
n0 <- -30
nMax <- 100
N <- (nMax - n0)/fs
mids <- seq(0, N - 1)*fs + n0

# Sample two continuous probability density functions (fO and fB) that must be deconvolved
fO <- density(rnorm(1000000, 40, 6), bw=1.5, from=n0, to=nMax, n=N)
fB <- hist(rnorm(1000000, 14, 2), breaks=c((mids - fs/2), (tail(mids, 1) + fs/2)), plot=FALSE)$density

# Perform the FFT
fO.fft <- fft(fO$y)
fB.fft <- fft(fB)

# Divide the FFTs (deconvolution) and go back to real space
fT <- abs(fft(fO.fft/(fs*fB.fft), inverse=TRUE))/length(fO$x)

# Plot the results to check the deconvolution
plot(fO$x, fB, type='l', col=1)   # First signal (B)
lines(fO, type='l', col=2)        # Second signal (O)
shift <- which.min(abs(mids))     # Index of the t = 0 sample. Now I need to swap the deconvolved signal... why?
fT.shifted <- c(tail(fT, shift), fT[1:(length(fT) - shift)])   # Swapped (block-rotated) deconvolved signal
lines(mids, abs(fT.shifted), type='l', col=3)
lines(mids, dnorm(mids, 26, sqrt(36 - 4)), col=4)   # Theoretical deconvolved signal
legend(x='topright', legend=c('B', 'O', 'Estimated T', 'True T'), col=1:4, lwd=1)
 
I don't know R, but it probably treats the distributions as periodic (at least the underlying mathematics does): an entry at position x and an entry at x+N are the same thing as far as the algorithm is concerned.

I moved the thread to the programming section.
 
Note: I've been taught multiple ways to define the Fourier transform, with factors of ##2\pi## in different places. This post uses ##F(u)=\int f(x)e^{-i2\pi ux}dx##. Depending on your convention, you may find stray factors of ##2\pi## popping up below.

Your problem is a consequence of the combination of your FFT algorithm assuming that the first point of your input array lies at x=0 and the implicit periodicity that mfb mentioned.

It's fairly easy to show that, if the Fourier transform of ##f(x)## is ##F(u)##, then the Fourier transform of ##f(x+a)## is ##e^{i2\pi au}F(u)##. Trivially, then, the inverse Fourier transform of ##e^{i2\pi au}F(u)## returns ##f(x+a)##.

If you take two functions, ##f(x+a)## and ##g(x+a)##, Fourier transform them and divide them, the complex exponential cancels out leaving just ##F(u)/G(u)##. When you inverse Fourier transform you get back ##f(x)## with ##g(x)## deconvolved out - in other words, the answer you want offset by ##a## compared to what you were expecting. So actually your negative time points have dropped off the beginning of your array. Fortunately, discrete Fourier transforms assume that their input is periodic - so the negative time points of the next iteration have appeared at the end of your array (if that makes sense).

Assuming you had ##n## negative time points, all you need to do is chop off the last ##n## points (or is it the last ##n+1##? Not sure where the zero goes any more - experiment with something you know the answer to!) of the final array and stick them on again at the beginning as, I think, you have realized. This was usually called a block-swapping operation back when I last did it (10+ years ago, so beware of language drift).
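
In R, something like this should do it (just a sketch: fT, mids, n0 and fs are the variables from your code, and n.neg is the number of negative-time sample points, which is -n0/fs = 600 on your grid):

Code:
# Block swap: move the last n.neg points of the raw inverse-FFT output
# to the front so the array lines up with mids again.
block.swap <- function(x, n.neg) {
  c(tail(x, n.neg), head(x, length(x) - n.neg))
}

# With the vectors from the original code (mind the possible off-by-one
# at t = 0 mentioned above - test it on a case with a known answer):
# n.neg <- round(-n0/fs)
# fT.shifted <- block.swap(fT, n.neg)
# lines(mids, fT.shifted, col=3)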
 
Alternatively, multiply the FFT ratio by the factor ##e^{i 2 \pi a u}## (up to factors of ##\pm 2 \pi##).
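
In R that could look something like the following sketch, using the variables from the original code (fO.fft, fB.fft, fs, n0, N, mids) and assuming the grid starts at ##a = n_0## with ##n_0## an integer multiple of the sampling step, as it is above. The sign of the exponent depends on the FFT convention, so check it against a case with a known answer.

Code:
# Put the linear phase back onto the ratio before the inverse FFT,
# so no block swap is needed afterwards.
u <- (0:(N - 1))/(N*fs)                  # DFT frequency axis
phase <- exp(2i*pi*n0*u)                 # e^{i 2 pi a u} with a = n0
fT2 <- Re(fft(phase*fO.fft/(fs*fB.fft), inverse=TRUE))/N
# fT2 should now line up with mids directly, e.g.
# lines(mids, fT2, col=3)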
 
Thank you all for your helpful replies!
 
Just a practical comment: deconvolution using the FFT is prone to numerical instability if the input data contains noise. Solutions that look good on paper sometimes don’t work terribly well in a real-life digital computer.

There is a fantastic book that I’d highly recommend: Numerical Recipes - The Art of Scientific Computing by Press, Teukolsky, Vetterling and Flannery.

Quoting directly from the section on deconvolution: “The process is generally quite sensitive to noise in the input data and the accuracy to which the response function is known. Perfectly reasonable attempts at deconvolution can sometimes produce nonsense for these reasons.” (Page 542, Second Edition, C-language version.)

The book goes on to discuss solutions to these problems and also presents alternative algorithms.
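
As one concrete illustration of that kind of fix (this sketch is mine, not taken from the book): a simple Wiener/Tikhonov-style regularisation of the division, which damps the frequencies where the response fB.fft is weak instead of dividing by them outright. The variable names come from the code in the first post; eps is a hypothetical tuning parameter you would have to choose.

Code:
# Regularised deconvolution: replace 1/(fs*fB.fft) with
# Conj(fs*fB.fft)/(|fs*fB.fft|^2 + eps), which behaves like the plain
# division where the response is strong and goes to zero where it is
# buried in noise.
eps <- 1e-3*max(Mod(fs*fB.fft))^2        # hypothetical tuning parameter
H <- Conj(fs*fB.fft)/(Mod(fs*fB.fft)^2 + eps)
fT.reg <- Re(fft(fO.fft*H, inverse=TRUE))/N
# As eps -> 0 this reduces to the plain division; larger eps trades bias
# for noise suppression.  The block swap / phase correction discussed
# above still applies before plotting against mids.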
 