Fourier Transform of a Probability Distribution

In summary: the central limit theorem states that the sum of many independent samples of a continuous probability distribution will approximate a normal distribution. This can be visualized by convolving the pdf of the original distribution with itself multiple times, with each convolution resulting in a distribution that resembles a normal distribution more closely. Additionally, the Fourier transform of a Gaussian distribution is also a Gaussian, and convolution in the original domain is equivalent to multiplication in the Fourier domain, so the Fourier transform of a pdf raised to a large power approximates a Gaussian.
  • #1
NatanijelVasic
Hi all!

Yesterday I was thinking about the central limit theorem, and in doing so, I reached a conclusion that I found surprising. It could just be that my arguments are wrong, but this was my process:

1. First, define a continuous random variable ##X## with an arbitrary pdf.

2. Define a new random variable ##Y = X_1 + X_2 + X_3 + \dots## (##Y## is the sum of many independent samples of ##X##).

3. To visualise the distribution of ##Y## as you add each term, I imagine starting with the pdf of ##X## and convolving it with itself. Then I take the result and convolve it again with the pdf of ##X##, and so on. According to the central limit theorem, each time you convolve the result with the pdf of ##X##, the new result will look more like the normal distribution, and this is indeed the case (I have visualised this myself with graphical convolution animations).

4. It was at this point that I realized that the Fourier transform of a Gaussian is also a Gaussian.

5. I also recalled that convolution in the original domain (i.e. the operations we performed in 3) is the equivalent of multiplication in the Fourier domain.

6. Points 4 and 5 together imply that the Fourier transform of the pdf of ##X##, raised to a large power, will approximate a normal distribution.

What I find surprising about this conclusion is that it is possible to create almost any pdf of your choosing, even a randomly drawn squiggle that is positive and integrates to 1, and its FT raised to the nth power (where ##n## is large) will always be approximately Gaussian. Is this correct? I am trying to find an intuitive explanation for this.
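A minimal numpy sketch of this experiment (the grid spacing, the particular random squiggle, and the number of terms below are arbitrary choices):

```python
# A minimal numerical sketch (assumed details: grid spacing, the particular
# random squiggle, and n are arbitrary choices): repeatedly convolve a
# normalized pdf with itself and compare the result to the CLT Gaussian.
import numpy as np

dx = 0.01
x = np.arange(0.0, 1.0, dx)
rng = np.random.default_rng(0)
pdf = rng.random(x.size) + 0.1      # an arbitrary positive "squiggle"
pdf /= pdf.sum() * dx               # normalize so it integrates to 1

n = 30
conv = pdf.copy()
for _ in range(n - 1):
    conv = np.convolve(conv, pdf) * dx   # pdf of the sum of one more sample

xs = np.arange(conv.size) * dx           # support of the n-fold sum
mu = np.sum(xs * conv) * dx
var = np.sum((xs - mu) ** 2 * conv) * dx
gauss = np.exp(-(xs - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

print("max |n-fold convolution - Gaussian| =", np.abs(conv - gauss).max())
```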

Thanks for the help!

Nat
 
  • #2
The Fourier transform of a Gaussian distribution is the characteristic function ##\exp(i \mu t - \frac {\sigma^2 t^2}2)##, which resembles a Gaussian distribution, but differs from it in a couple of significant ways. First, it has an imaginary component, so it lies in the complex plane. Second, it is not properly normalized, so it does not integrate to 1. You could avoid the first problem by using the Laplace transform (resulting in the moment generating function ##\exp(\mu t + \frac {\sigma^2 t^2}2)##), but in this case the coefficient of ##t^2## is positive, not negative. Perhaps if you can resolve these difficulties, you can check whether your intuition is correct.
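For reference, a sketch of the standard computation behind that characteristic function (completing the square in the exponent):

$$\mathbb{E}\!\left[e^{itX}\right] = \frac{1}{\sqrt{2\pi}\,\sigma}\int_{-\infty}^{\infty} e^{itx}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}\,dx = \exp\!\left(i\mu t - \frac{\sigma^2 t^2}{2}\right).$$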
 
  • #3
@tnich Thanks for the reply! :)

I have thought about these two points, but I would still say that my conclusion is surprising. Let me address your points:

"First, it has an imaginary component, so it lies in the complex plane."

Yes, this would be the case with a non-zero-mean Gaussian, or any shifted function (since a shift corresponds to a phase term in the Fourier domain, as your equation suggests). I should have stated, for simplicity, that the ##X## distribution has to be zero-mean; this is indeed what I had in mind, but I failed to communicate it. In the zero-mean case, the functions in both domains should be real-valued Gaussians centred at 0.

"Second, it is not properly normalized, so it does not integrate to 1."

For the zero-mean case, it doesn't matter so much that the function doesn't integrate to 1, as long as it approximates a Gaussian shape. So it still seems to me that it is possible to draw any zero-mean pdf, and ##(\mathrm{FT}(\text{pdf}))^n## will still approximate a real-valued, zero-mean Gaussian.
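One way to make this plausible is a second-order expansion of the transform about ##\omega = 0## (a standard CLT-style sketch, for a zero-mean pdf ##f(x)## with variance ##\sigma^2##):

$$\mathscr{F}[f](\omega) = \int f(x)\,e^{-i\omega x}\,dx = 1 - \frac{\sigma^{2}\omega^{2}}{2} + O(\omega^{3}),$$

so, for small ##\omega##,

$$\left(\mathscr{F}[f](\omega)\right)^{n} \approx \exp\!\left(n \ln\!\left(1 - \frac{\sigma^{2}\omega^{2}}{2}\right)\right) \approx e^{-n\sigma^{2}\omega^{2}/2},$$

which is exactly the real-valued, zero-centred Gaussian shape I have in mind.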
 
  • #4
OK. Let's try a uniform distribution with pdf
$$f(x) =
\begin{cases} 0 & \text{if } x < 0
\\ 1 & \text{if } 0 \leq x \leq 1
\\ 0 & \text{if } x > 1 \end{cases}$$
Then ##f^n(x) = f(x)## for every ##n##.

Another case is an exponential distribution with pdf

$$g(x) =
\begin{cases} 0 & \text{if } x < 0
\\ {\exp(-x)} & \text{if } x\geq 0
\end{cases}$$

Then $$g^n(x) =
\begin{cases} 0 & \text{if } x < 0
\\ {\exp(-nx)} & \text{if } x\geq 0
\end{cases}$$

Neither of these functions approaches a Gaussian form when ##n## is large.
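Renormalizing the powers makes the point explicit: for the exponential case,

$$\frac{g^{n}(x)}{\int_{0}^{\infty} g^{n}(u)\,du} = n e^{-nx}, \qquad x \geq 0,$$

which is again an exponential density for every ##n##, so no Gaussian shape emerges.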
 
  • #5
tnich said: "Neither of these functions approaches a Gaussian form when ##n## is large."

Perhaps I didn't state my hypothesis as well as I should have. It's the Fourier transform of a zero-mean pdf, raised to a large power ##n##, that will approximate a Gaussian. That is:

##\left( \mathscr{F}\left[f(x)\right]\right)^{n} \approx c_{0}e^{-c_{1}\omega^{2}} \quad \text{for large } n##

We are convolving the pdfs in the original domain to find the distribution of the summed variable. This is equivalent to multiplying the Fourier transforms of the pdf in the frequency domain.
(A modified version of my hypothesis works with non-zero-mean pdfs as well, but in that case we look at the power spectrum, which is real, instead of the complex FT; we'll ignore this for now for simplicity and consider only zero-mean pdfs.)

In the case of a zero-mean uniform distribution, the Fourier transform is a real-valued sinc function. When the sinc function is raised to a large power ##n##, the result is almost exactly Gaussian (I didn't prove this analytically, but I plotted both; a numerical sketch is below).
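A minimal numpy sketch of that comparison (assuming the uniform pdf on ##[-\tfrac12, \tfrac12]##, whose transform with the ##e^{-2\pi i s x}## convention is exactly ##\operatorname{sinc}(s)##, and a Gaussian width obtained by matching the quadratic Taylor term):

```python
# Sketch comparing sinc^n with the Gaussian predicted by the quadratic
# Taylor term of n*log(sinc(s)) at s = 0 (assumed choices: n and the grid).
import numpy as np

s = np.linspace(-0.9, 0.9, 2001)   # stay inside the first zeros at s = +/-1
n = 40

# np.sinc(s) = sin(pi s)/(pi s): the FT of the uniform pdf on [-1/2, 1/2]
# with the exp(-2*pi*i*s*x) convention.
sinc_n = np.sinc(s) ** n
gauss = np.exp(-n * (np.pi * s) ** 2 / 6)   # from sinc(s) ~ exp(-(pi s)^2 / 6)

print("max |sinc^n - Gaussian| =", np.abs(sinc_n - gauss).max())
```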
 
  • #6
NatanijelVasic said:
Perhaps I didn't state my hypothesis as well as I should have. It's the Fourier transform of a zero-mean pdf, raised to a large power ##n##, that will approximate a Gaussian. [...]

In the case of a zero-mean uniform distribution, the Fourier transform is a real-valued sinc function. When the sinc function is raised to a large power ##n##, the result is almost exactly Gaussian.
Yes, I realized after I sent my reply that you meant the nth power of the Fourier transform of the pdf. The ##\operatorname{sinc}(s)## function, though, has zeros at regular intervals. Wouldn't ##\operatorname{sinc}^n(s)## still have zeros at the same points?
 
  • #7
tnich said:
Yes, I realized after I sent my reply that you meant the nth power of the Fourier transform of the pdf. The ##\operatorname{sinc}(s)## function, though, has zeros at regular intervals. Wouldn't ##\operatorname{sinc}^n(s)## still have zeros at the same points?

Yes, this is certainly the case. However, the limiting behaviour takes care of this: as ##n## gets large, the resulting Gaussian approximation gets "squashed" towards the centre, so the first zero crossing occurs many standard deviations out, where the exact Gaussian is almost 0 anyway (and the error at all the crossings approaches zero as ##n## goes to infinity).
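To put a rough number on that, using the expansion ##\operatorname{sinc}(s) \approx e^{-(\pi s)^{2}/6}## near ##s = 0##:

$$\operatorname{sinc}^{n}(s) \approx e^{-n\pi^{2}s^{2}/6}, \qquad \sigma_{n} = \sqrt{\frac{3}{n\pi^{2}}},$$

so the first zero of ##\operatorname{sinc}^{n}## at ##s = 1## sits about ##1/\sigma_{n} = \pi\sqrt{n/3} \approx 1.8\sqrt{n}## standard deviations from the centre, where the Gaussian itself is already vanishingly small.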

NOTE: In the plots, the approximation overlays the actual Gaussian and appears as a single black curve, when in fact there are two black curves.
 

Attachments

[Two plots of ##\operatorname{sinc}^n(s)## overlaid on the fitted Gaussian; the two curves coincide visually.]
  • #8
OK.
Now that I understand what you are trying to say, I agree with your conclusion. I think it is implied by the Central Limit Theorem.
 
  • #9
tnich said:
OK.
Now that I understand what you are trying to say, I agree with your conclusion. I think it is implied by the Central Limit Theorem.
So you can use the assumptions of the CLT to specify what kinds of functions raised to the nth power converge to a Gaussian form. The CLT assumes that the area under the pdf curve ##f(t)## is 1, and that the mean and variance are both finite. So the Fourier transform ##F(s)## of such a function would have ##F(0)=1##, ##F'(0)## and ##F''(0)## both finite, and ##F''(0) < 0##. That means that the value of ##F(s)## at ##s = 0## is a local maximum.

The inverse Fourier transform is itself a Fourier transform, so the area under ##F(s)## must be finite because ##f(0)## is finite. This starts to look like a curve that could behave the way you want it to.
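Concretely, with the convention ##F(s) = \int f(t)\,e^{-ist}\,dt## (other conventions change only constants):

$$F(0) = \int f(t)\,dt = 1, \qquad F'(0) = -i\int t\,f(t)\,dt = -i\mu, \qquad F''(0) = -\int t^{2}f(t)\,dt = -\mathbb{E}[t^{2}] < 0.$$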
 
  • #10
tnich said:
So you can use the assumptions of the CLT to specify what kinds of functions raised to the nth power converge to a Gaussian form. [...]
Forgot to say that since we are assuming the mean is zero, ##F'(0) = 0##.
 

1. What is the Fourier transform of a probability distribution?

The Fourier transform of a probability distribution, known in probability theory as the characteristic function (up to sign conventions), decomposes the distribution into its constituent frequency components. It is used to analyze the frequency content of a distribution and is a powerful tool in probability theory, signal processing, and data analysis.

2. How is the Fourier transform of a probability distribution calculated?

It is calculated with an integral that multiplies the probability density function by a complex exponential and integrates over the whole real line. For many standard distributions this integral can be evaluated in closed form using techniques such as completing the square, substitution, or integration by parts.
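In symbols, using the characteristic-function convention:

$$\varphi_X(t) = \mathbb{E}\!\left[e^{itX}\right] = \int_{-\infty}^{\infty} e^{itx} f(x)\,dx.$$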

3. What is the significance of the Fourier transform of a probability distribution?

It reveals the frequency content of a probability distribution, and it turns convolution into multiplication, which makes sums of independent random variables easy to analyze. It is also used in applications such as image processing, audio signal analysis, and data compression.

4. Can the Fourier transform of a probability distribution be applied to all types of data?

Yes, it can be applied to any data that can be represented as a probability distribution, both continuous and discrete (for a discrete distribution the integral becomes a sum), across fields such as physics, engineering, and economics.

5. Are there any limitations to using the Fourier transform of a probability distribution?

One limitation is that the transform characterizes a single, fixed distribution; if the underlying distribution changes over time (non-stationary data), a single transform will not capture that. Estimates of the transform computed from data can also be distorted by outliers or extreme values. For non-linear or non-stationary settings, other tools may be more suitable.
