# Fourier transform of noise


Hello,

when we want to get the magnitude of the Fourier frequency spectrum of a function f we typically calculate $$F(\omega)=\int_{\mathbb{R}}f(x)e^{-i\omega x}dx$$
and then consider $|F(\omega)|$.

We can do this as long as the signal (i.e., the function) is deterministic, that is, as long as a single known value f(x) is associated with every x.

What happens when f(x) is no longer deterministic? In other words, we don't know the exact value of f(x); we can only say that f(x) follows a certain probability density function. For example, I could say that $$f(x) \sim \mathcal{U}(-1 , 1)$$ which means that for a given x, f(x) is now a random variable uniformly distributed between -1 and 1.
If we plotted such a "function" against x we would see a noisy plot with amplitudes between -1 and 1.
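For concreteness, here is a small sketch (my own illustration, not part of any standard treatment) that samples one realization of such a "function" on a grid and checks its empirical moments against those of $\mathcal{U}(-1,1)$:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 10_000)           # sample grid for x
f = rng.uniform(-1.0, 1.0, size=x.shape)    # one realization of f(x) ~ U(-1, 1)

# Empirical moments should match U(-1, 1): mean 0, variance (b - a)^2 / 12 = 1/3
print(f.mean())   # close to 0
print(f.var())    # close to 1/3
```

Plotting `f` against `x` gives exactly the noisy band between -1 and 1 described above.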

I would like to calculate the magnitude of the Fourier spectrum of such a function, but I don't know where to start. What can we say about $|F(\omega)|$? Any hint?


jasonRF

Hello mnb96.

If your signal is not deterministic, then you are essentially talking about a stochastic process (also called a random process). The Fourier transform / Fourier series of a stochastic process can indeed be defined; the key is how to interpret the integral. We engineers learn mean-square calculus and interpret the integral in the mean-square sense. One thing you can do immediately is make statements about expectations of the Fourier transform. For example, by linearity of expectation:

$$E\left[ F(\omega) \right] =\int_{\mathbb{R}} E\left[f(x) \right]e^{-i\omega x}dx$$

The second order moment can be related to the Fourier transform of the autocorrelation function.
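As a quick numerical illustration (my own sketch; the DFT over many realizations stands in for the continuous transform), one can check that the averaged transform of zero-mean noise vanishes while the second moment does not:

```python
import numpy as np

# Monte Carlo estimate of E[F] and E[|F|^2] for zero-mean uniform noise.
rng = np.random.default_rng(1)
N, trials = 256, 2000
f = rng.uniform(-1.0, 1.0, size=(trials, N))   # rows: realizations of f
F = np.fft.fft(f, axis=1)                      # DFT of each realization

EF = F.mean(axis=0)                  # estimate of E[F_k]; near 0 since E[f] = 0
EP = (np.abs(F) ** 2).mean(axis=0)   # estimate of E[|F_k|^2]; NOT near 0

# For uncorrelated samples with variance s^2 = 1/3, E[|F_k|^2] = N * s^2
print(np.abs(EF).max())   # small
print(EP.mean())          # close to 256/3
```

So the first moment of the transform carries only the (here zero) mean of the process; the interesting information sits in the second moment, i.e. in the autocorrelation.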

I won't attempt to provide a detailed discussion here (I admit I am a bit rusty, since I live in the discrete world for the most part ...), but I would look in the online books by Hajek (exploration of random processes ...) and Gray (statistical signal processing) as possible places to start. I learned about this from Papoulis (probability, random variables and stochastic processes), which was the standard electrical engineering book when I was in school. The terms below should get you going on Google:

The Fourier transform/series is usually called the "spectral representation" of the random process. A generalization of the Fourier series using arbitrary orthogonal functions is called the "Karhunen-Loeve expansion", and it is related to principal component analysis.

I hope that helps a little!

Jason

If you are talking about "white noise", the power spectral density (the Fourier transform of the autocorrelation) is constant; for band-limited white noise it is constant over a finite range of $\omega$.
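This flatness is easy to see numerically (my sketch, using an averaged periodogram of discrete uniform noise as a stand-in for white noise):

```python
import numpy as np

# Averaged periodogram of discrete "white" noise: flat across frequency bins.
rng = np.random.default_rng(2)
N, trials = 128, 5000
f = rng.uniform(-1.0, 1.0, size=(trials, N))
P = (np.abs(np.fft.fft(f, axis=1)) ** 2).mean(axis=0)  # estimate of E[|F_k|^2]

flatness = P.std() / P.mean()   # small relative spread => flat spectrum
print(flatness)
```

A single realization's periodogram is wildly jagged; only the ensemble average (the power spectral density) is flat, which is exactly why the expectation matters here.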


Hi jasonRF!

Thanks a lot! Your explanation was really helpful. You pointed me in the right direction by noting that what we can do is simply compute the expected value of F(ω), or analogously the variance of F(ω) (which I was able to derive myself, finding the relationship you mentioned with the autocorrelation).

I started playing around with this and, unless I made a mistake in my proof, I also noticed that if f(x) is a random process, then an analogue of Parseval's theorem holds:

$$E \left[ \left\| f \right\|^2 \right] = E \left[ \left\| F \right\|^2 \right]$$

where $E \left[ \left\| f \right\|^2 \right] = E \int_{-\infty}^{+\infty} |f(x)|^2 dx$ is the expected squared $L^2$-norm of f. Is this correct?

However, for the above to be valid, I had to assume that the random process f lives on a finite range of x; otherwise the expected norm diverges.
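A quick sanity check of the discrete analogue (my sketch; with NumPy's unnormalized FFT convention, Parseval reads $\sum_n |f_n|^2 = \frac{1}{N}\sum_k |F_k|^2$, exactly per realization and hence also in expectation):

```python
import numpy as np

rng = np.random.default_rng(3)
N, trials = 128, 1000
f = rng.uniform(-1.0, 1.0, size=(trials, N))
F = np.fft.fft(f, axis=1)

# Parseval holds for every realization, so it holds for the expectations too:
#   E[ sum |f_n|^2 ] = E[ (1/N) sum |F_k|^2 ]
E_time = (np.abs(f) ** 2).sum(axis=1).mean()
E_freq = (np.abs(F) ** 2).sum(axis=1).mean() / N
print(E_time, E_freq)   # equal (up to roundoff); both close to N/3
```

The finite-range caveat shows up here as the finite number of samples N: the expected energy grows like N times the variance, so it diverges as the support becomes unbounded.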