Fourier Transform of a Probability Distribution

SUMMARY

The discussion centers on the relationship between the Fourier transform of a probability distribution and the Central Limit Theorem (CLT). Participants explore how the Fourier transform of a zero-mean probability density function (pdf), when raised to a large power, approaches a Gaussian distribution. Key points include the properties of convolution in the original domain corresponding to multiplication in the Fourier domain, and the implications of the CLT on the behavior of various pdfs. The conclusion asserts that the Fourier transform of a zero-mean pdf raised to a high power will approximate a Gaussian distribution, supported by graphical evidence.

PREREQUISITES
  • Understanding of the Central Limit Theorem (CLT)
  • Knowledge of Fourier transforms and their properties
  • Familiarity with probability density functions (pdfs)
  • Basic concepts of convolution in probability theory
NEXT STEPS
  • Study the properties of the Fourier transform of various probability distributions
  • Explore graphical convolution techniques for visualizing pdf transformations
  • Investigate the implications of the Central Limit Theorem on different types of pdfs
  • Learn about the relationship between Fourier transforms and moment generating functions
USEFUL FOR

Mathematicians, statisticians, data scientists, and anyone interested in the applications of Fourier transforms in probability theory and statistical analysis.

NatanijelVasic
Hi all!

Yesterday I was thinking about the central limit theorem, and in doing so, I reached a conclusion that I found surprising. It could just be that my arguments are wrong, but this was my process:

1. First, define a continuous probability distribution X.

2. Define a new random variable Y = X1 + X2 + X3 + ... (Y is the sum of many independent samples of X).

3. To visualise the distribution of Y as you add each term, I imagine starting with the pdf of X and convolving it with itself. Then I take the result and convolve it again with the pdf of X, and so on. According to the central limit theorem, each time you convolve the result again with the pdf of X, the new result will look more like the normal distribution, and this is indeed the case (I have visualised this myself with graphical convolution animations).

4. It was at this point that I realized that the Fourier transform of a Gaussian is also a Gaussian.

5. I also recalled that convolution in the original domain (i.e. the operations we performed in 3) is the equivalent of multiplication in the Fourier domain.

6. Points 4 and 5 together imply that the Fourier transform of the pdf of X, raised to a large power, will approximate a normal distribution.

What I find surprising about this conclusion is that it is possible to create almost any pdf of your choosing, even a randomly drawn squiggle that is positive and integrates to 1, and its FT^n (where n is large) will always be approximately Gaussian. Is this correct? I am trying to find an intuitive explanation for this.
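The graphical-convolution experiment in step 3 is easy to sketch numerically. A minimal illustration (the uniform pdf, the grid spacing, and n = 12 are arbitrary choices of mine), comparing the n-fold convolution with the Gaussian the CLT predicts:

```python
import numpy as np

dx = 0.01
x = np.arange(-0.5, 0.5 + dx / 2, dx)       # support of Uniform(-1/2, 1/2)
pdf = np.ones_like(x)
pdf /= pdf.sum() * dx                        # normalise so the area is exactly 1

n = 12                                       # number of summed samples
result = pdf
for _ in range(n - 1):
    result = np.convolve(result, pdf) * dx   # * dx keeps the area equal to 1

# Grid for the n-fold convolution: the support is [-n/2, n/2]
xs = np.arange(result.size) * dx - n / 2
sigma = np.sqrt(n / 12)                      # Var of Uniform(-1/2, 1/2) is 1/12
gauss = np.exp(-xs**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

print(np.max(np.abs(result - gauss)))        # already small at n = 12
```

Even at n = 12 the maximum pointwise difference between the convolved pdf and the CLT Gaussian is small, which matches what the animations show.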

Thanks for the help!

Nat
 
The Fourier transform of a Gaussian distribution is the characteristic function ##\exp(i \mu t - \frac {\sigma^2 t^2}2)##, which resembles a Gaussian distribution, but differs from it in a couple of significant ways. First, it has an imaginary component, so it lies in the complex plane. Second, it is not properly normalized, so it does not integrate to 1. You could avoid the first problem by using the Laplace transform (resulting in the moment generating function ##\exp(\mu t + \frac {\sigma^2 t^2}2)##), but in this case the coefficient of ##t^2## is positive, not negative. Perhaps if you can resolve these difficulties, you can check whether your intuition is correct.
 
@tnich Thanks for the reply! :)

I have thought about these two points, but I would still say that my conclusion is surprising. Let me address your points:

First, it has an imaginary component, so it lies in the complex plane

Yes, this would be the case with a non-zero mean Gaussian, or any function that is shifted (as this corresponds to a phase shift term in the Fourier domain, as your equation suggests). I should have stated, for simplicity, that the X distribution has to be zero mean, and this is indeed what I had in my head, but I failed to communicate that. In the zero-mean case, the functions in both domains should be real-valued Gaussians centred at 0.

Second, it is not properly normalized, so it does not integrate to 1

For the zero-mean case, it doesn't matter so much if the function doesn't integrate to 1, as long as it approximates a Gaussian shape. So it still seems to me that it is possible to draw any zero-mean pdf, and (FT(pdf))^n will still approximate a real-valued, zero-mean Gaussian.
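Here is a quick numerical check of this claim (the random squiggle, the grid, and n = 50 below are arbitrary illustrative choices; I symmetrise the squiggle so that its mean is exactly zero and its FT is real-valued):

```python
import numpy as np

rng = np.random.default_rng(0)

dx = 0.01
x = np.arange(-2 + dx / 2, 2, dx)       # symmetric grid of cell midpoints
f = rng.random(x.size) + 0.1            # an arbitrary positive "squiggle"
f = 0.5 * (f + f[::-1])                 # symmetrise -> zero mean, real FT
f /= f.sum() * dx                       # normalise so the area is 1
var = (f * x**2).sum() * dx             # variance of the squiggle pdf

n = 50
w = np.linspace(-1.5, 1.5, 301)
# FT of the pdf (its characteristic function), by direct summation
ft = (f * np.exp(1j * np.outer(w, x))).sum(axis=1).real * dx
gauss = np.exp(-n * var * w**2 / 2)     # Gaussian predicted by the CLT

print(np.max(np.abs(ft**n - gauss)))    # small: the nth power is near-Gaussian
```

The nth power of the FT of this completely arbitrary squiggle lands essentially on top of the Gaussian exp(-n·σ²·ω²/2), as claimed.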
 
OK. Let's try a uniform distribution with pdf
$$f(x) =
\begin{cases} 0 & \text{if } x < 0
\\ 1 & \text{if } 0 \leq x \leq 1
\\ 0 & \text{if } x > 1 \end{cases}$$
Then ##f^n(x) = f(x)##

Another case is an exponential distribution with pdf

$$g(x) =
\begin{cases} 0 & \text{if } x < 0
\\ {\exp(-x)} & \text{if } x\geq 0
\end{cases}$$

Then $$g^n(x) =
\begin{cases} 0 & \text{if } x < 0
\\ {\exp(-nx)} & \text{if } x\geq 0
\end{cases}$$

Neither of these functions approaches a Gaussian form when n is large.
 
tnich said:
Neither of these functions approaches a Gaussian form when n is large.

Perhaps I didn't state my hypothesis as well as I should have. It's the Fourier transform of a zero-mean pdf, raised to a large power n, that will approximate a Gaussian. That is:

## \left( \mathscr{F}\left[f(x)\right]\right)^{n} \approx c_{0}e^{-c_{1}\omega^{2}} \quad \text{for large } n##

We are convolving the PDFs in the original domain to find the new distribution of the summed variables. This would be the equivalent operation of multiplying the Fourier transforms of the PDF in the frequency domain.
(A modified version of my hypothesis works with non-zero-mean PDFs as well, but in this case we look at the real power spectrum instead of the complex FT - we'll ignore this for now for simplicity and only consider zero-mean pdfs.)

In the case of a zero-mean uniform distribution, the Fourier transform is a real-valued sinc function. When the sinc function is raised to a large power n, the result is almost exactly Gaussian (I didn't prove this analytically, but I plotted both).
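Concretely, for Uniform(-1/2, 1/2) the FT is sin(ω/2)/(ω/2) and the variance is 1/12, so the CLT predicts that the nth power approaches exp(-nω²/24). A small sketch of the comparison (n = 40 is an arbitrary choice):

```python
import numpy as np

n = 40                                  # even, so sinc**n is non-negative
w = np.linspace(-10, 10, 2001)
ft = np.sinc(w / (2 * np.pi))           # numpy's sinc(z) = sin(pi*z)/(pi*z),
                                        # so this is sin(w/2)/(w/2)
gauss = np.exp(-n * w**2 / 24)          # exp(-n * sigma^2 * w^2 / 2), sigma^2 = 1/12

print(np.max(np.abs(ft**n - gauss)))    # the two curves are nearly identical
```

At n = 40 the two curves are indistinguishable at plot resolution, which is exactly what the overlaid plots show.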
 
NatanijelVasic said:
Perhaps I didn't state my hypothesis as well as I should have. It's the Fourier transform of a zero-mean pdf, raised to a large power n, that will approximate a Gaussian. That is:

## \left( \mathscr{F}\left[f(x)\right]\right)^{n} \approx c_{0}e^{-c_{1}\omega^{2}} \quad \text{for large } n##

We are convolving the PDFs in the original domain to find the new distribution of the summed variables. This would be the equivalent operation of multiplying the Fourier transforms of the PDF in the frequency domain.
(A modified version of my hypothesis works with non-zero-mean PDFs as well, but in this case we look at the real power spectrum instead of the complex FT - we'll ignore this for now for simplicity and only consider zero-mean pdfs.)

In the case of a zero-mean uniform distribution, the Fourier transform is a real-valued sinc function. When the sinc function is raised to a large power n, the result is almost exactly Gaussian (I didn't prove this analytically, but I plotted both).
Yes, I realized after I sent my reply that you meant the nth power of the Fourier transform of the pdf. The ##\text{sinc}(s)## function, though, has zeros at regular intervals. Wouldn't ##\text{sinc}^n(s)## still have zeros at the same points?
 
tnich said:
Yes, I realized after I sent my reply that you meant the nth power of the Fourier transform of the pdf. The ##\text{sinc}(s)## function, though, has zeros at regular intervals. Wouldn't ##\text{sinc}^n(s)## still have zeros at the same points?

Yes, this is certainly the case. However, the limiting behaviour takes care of this: as n gets large, the resulting Gaussian approximation gets "squashed" towards the centre, so the first zero crossing occurs many standard deviations out, where the exact Gaussian is almost 0 anyway (and the error at all the crossings approaches zero as n goes to infinity).
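Putting numbers on the "squashing" (the n values below are arbitrary): for Uniform(-1/2, 1/2) the first zero of sin(ω/2)/(ω/2) sits at ω = 2π, which is 2π√(n/12) ≈ 1.81√n standard deviations of the Gaussian exp(-nω²/24) from the centre, so the Gaussian is already negligible there:

```python
import numpy as np

# First zero of sin(w/2)/(w/2) is at w = 2*pi; the CLT Gaussian for
# Uniform(-1/2, 1/2) is exp(-n * w**2 / 24), with standard deviation
# sqrt(12/n) in w.
w0 = 2 * np.pi
for n in (5, 10, 20):
    sds_out = w0 / np.sqrt(12 / n)           # zero location, in standard deviations
    print(n, sds_out, np.exp(-n * w0**2 / 24))
```

Already at n = 5 the first zero is about 4 standard deviations out, and the Gaussian value there (hence the error at the crossing) shrinks geometrically with n.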

NOTE: In the plots, the approximation overlays the actual Gaussian and appears as a single black curve, when in fact there are two black curves.
 

Attachments

  • Screenshot 2019-04-02 at 20.31.19.png
  • Screenshot 2019-04-02 at 20.23.31.png
OK.
Now that I understand what you are trying to say, I agree with your conclusion. I think it is implied by the Central Limit Theorem.
 
tnich said:
OK.
Now that I understand what you are trying to say, I agree with your conclusion. I think it is implied by the Central Limit Theorem.
So you can use the assumptions of the CLT to specify what kinds of functions raised to the nth power converge to a Gaussian form. The CLT assumes that the area under the pdf curve ##f(t)## is 1, and that the mean and variance are both finite. So the Fourier transform ##F(s)## of such a function would have ##F(0)=1##, ##F'(0)## and ##F''(0)## both finite, and ##F''(0) < 0##. That means that the value of ##F(s)## at ##s = 0## is a local maximum.

The inverse Fourier transform is itself a Fourier transform, so the area under ##F(s)## must be finite because ##f(0)## is finite. This starts to look like a curve that could behave the way you want it to.
 
tnich said:
So you can use the assumptions of the CLT to specify what kinds of functions raised to the nth power converge to a Gaussian form. The CLT assumes that the area under the pdf curve ##f(t)## is 1, and that the mean and variance are both finite. So the Fourier transform ##F(s)## of such a function would have ##F(0)=1##, ##F'(0)## and ##F''(0)## both finite, and ##F''(0) < 0##. That means that the value of ##F(s)## at ##s = 0## is a local maximum.

The inverse Fourier transform is itself a Fourier transform, so the area under ##F(s)## must be finite because ##f(0)## is finite. This starts to look like a curve that could behave the way you want it to.
Forgot to say that since we are assuming the mean is zero, ##F'(0) = 0##.
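These conditions (##F(0)=1##, ##F'(0)=0##, ##F''(0)<0##) are easy to confirm numerically for a concrete zero-mean pdf. A quick sketch using central differences (the parabolic pdf ##f(x) = \frac{3}{4}(1-x^2)## on ##[-1, 1]## and the sign convention ##F(s) = \int f(x)e^{-isx}dx## are illustrative choices of mine):

```python
import numpy as np

dx = 0.001
x = np.arange(-1 + dx / 2, 1, dx)        # symmetric grid of cell midpoints
f = 0.75 * (1 - x**2)                    # a zero-mean pdf on [-1, 1]

def F(s):
    """F(s) = integral of f(x) exp(-i s x) dx, via the midpoint rule."""
    return (f * np.exp(-1j * s * x)).sum() * dx

h = 1e-3                                  # step for central differences
F0 = F(0.0)
Fp = (F(h) - F(-h)) / (2 * h)             # ~ F'(0), = 0 for a zero-mean pdf
Fpp = (F(h) - 2 * F(0.0) + F(-h)) / h**2  # ~ F''(0) = -E[x^2] < 0

print(F0.real, abs(Fp), Fpp.real)         # ~1, ~0, ~-0.2 (here E[x^2] = 1/5)
```

So ##F(s)## has a local maximum of 1 at ##s = 0## with zero slope, exactly as the argument above requires.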
 
