How Does the Fourier Transform Handle Non-Deterministic Noise?

Fourier transform of "noise"

Hello,

when we want the magnitude of the Fourier frequency spectrum of a function f, we typically calculate F(\omega)=\int_{\mathbb{R}}f(x)e^{-i\omega x}dx
and then consider |F(\omega)|.

We can do this as long as the signal (= function) is deterministic, that is, a single known value f(x) is associated with every x.

What happens when f(x) is not deterministic anymore? In other words, we don't know the exact value of f(x); we can only say that f(x) follows a certain probability density function. For example, I could say that f(x) \sim \mathcal{U}(-1 , 1), which means that for a given x, f(x) is now a random variable with a uniform probability distribution between -1 and 1.
If we plotted such a "function" against x we would see a noisy plot with amplitudes between -1 and 1.
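As a quick numerical illustration of the setup (a sketch only: the NumPy FFT is used as a discrete stand-in for the continuous transform, and the grid size and seed are arbitrary choices, not part of the question):

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed so the sketch is reproducible
x = np.linspace(0.0, 1.0, 1024, endpoint=False)
f = rng.uniform(-1.0, 1.0, size=x.size)  # one realization of f(x) ~ U(-1, 1)

# Discrete stand-in for F(omega): the FFT of the sampled realization
F = np.fft.fft(f)
magnitude = np.abs(F)  # |F(omega)| for this particular realization
```

Each run draws a different realization and hence a different |F(\omega)|, which is exactly why the question of what can be said about the spectrum of a random signal arises.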

I would like to calculate the magnitude of the Fourier spectrum of such a function, but I don't know where to start. What can we say about |F(\omega)|? Any hint?
 


Hello mnb96.

If your signal is not deterministic, then you are essentially talking about a stochastic process (also called a random process). The Fourier transform / Fourier series of a stochastic process can indeed be defined; the key is how to interpret the integral. We engineers learn about mean-square calculus and interpret the integral in the mean-square sense. One thing you can do immediately is make statements about expectations of the Fourier transform. For example:

E\left[ F(\omega) \right] = \int_{\mathbb{R}} E\left[ f(x) \right] e^{-i\omega x}dx

The second-order moment can be related to the Fourier transform of the autocorrelation function (this is the Wiener-Khinchin theorem).
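One way to see the second-moment statement numerically is to average the periodogram |F_k|^2/N over many realizations. For zero-mean white noise the autocorrelation is a delta of height Var(f), so its transform is flat: for U(-1, 1), Var(f) = 1/3. A sketch (the sizes and seed are my own choices):

```python
import numpy as np

rng = np.random.default_rng(1)
N, trials = 256, 2000
psd_est = np.zeros(N)
for _ in range(trials):
    f = rng.uniform(-1.0, 1.0, size=N)       # white noise, Var(f) = 1/3
    psd_est += np.abs(np.fft.fft(f))**2 / N  # periodogram of this realization
psd_est /= trials
# E[|F_k|^2] / N converges to Var(f) = 1/3 at every frequency bin k
```

A single periodogram is very noisy (its standard deviation is comparable to its mean), which is why the averaging over realizations is essential here.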

I won't attempt a detailed discussion here (I admit I am a bit rusty, since I live in the discrete world for the most part ...), but I would look at the online books by Hajek (exploration of random processes ...) and Gray (statistical signal processing) as a possible place to start. I learned about this from Papoulis (Probability, Random Variables and Stochastic Processes), which was the standard electrical engineering book when I was in school. The terms below should get you going on Google:

The Fourier transform/series is usually called the "spectral representation" of the random process. A generalization of the Fourier series in which you use arbitrary orthogonal functions is called the "Karhunen-Loeve expansion", and is related to principal component analysis.
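For a flavor of the Karhunen-Loeve idea: the KL basis is the eigenbasis of the covariance of the process, which for sampled data is exactly what PCA computes. A sketch under my own choice of sizes and seed; for white noise the covariance is (1/3) times the identity, so there is no preferred basis and all eigenvalues cluster at Var(f) = 1/3:

```python
import numpy as np

rng = np.random.default_rng(3)
N, trials = 32, 5000
X = rng.uniform(-1.0, 1.0, size=(trials, N))  # each row: one realization
C = np.cov(X, rowvar=False)                   # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)          # KL basis = eigenvectors of C
# White noise: C is approximately (1/3) * I, so eigenvalues cluster near 1/3
```

For a correlated process the eigenvalues would instead spread out, and the leading eigenvectors would capture the dominant "modes" of the process.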

I hope that helps a little!

Jason
 


If you are talking about "white noise", the power spectral density (the Fourier transform of the autocorrelation) is flat: constant over all \omega for ideal white noise, or over a finite range of \omega for band-limited white noise.
 


Hi jasonRF!

thanks a lot! Your explanation was really helpful. You pointed me in the right direction by noting that what we can do is simply compute the expected value of F(ω), or analogously the variance of F(ω) (which I was able to derive myself, finding the relationship you mentioned with the autocorrelation).

I started to play around with this and, unless I made a mistake in my proof, I also noticed that if f(x) is a random process, then we have an equivalent of Parseval's theorem:

E \left[ \left\| f \right\|^2 \right] = E \left[ \left\| F \right\|^2 \right]

where E \left[ \left\| f \right\|^2 \right] = E \int_{-\infty}^{+\infty} |f(x)|^2 dx is the expected squared L^2 norm of f. Is this correct?

However, for the above to be valid, I had to assume that the random process f is supported on a finite range of x; otherwise the expected norm diverges.
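In discrete form this is easy to check: with NumPy's FFT convention, \sum |f_n|^2 = \sum |F_k|^2 / N holds exactly for every realization, so it also holds in expectation; with f ~ U(-1, 1), the expected squared norm is N \cdot Var(f) = N/3. A Monte Carlo sketch (sizes and seed are assumptions of mine):

```python
import numpy as np

rng = np.random.default_rng(2)
N, trials = 128, 500
lhs = rhs = 0.0
for _ in range(trials):
    f = rng.uniform(-1.0, 1.0, size=N)
    F = np.fft.fft(f)
    lhs += np.sum(np.abs(f)**2)      # ||f||^2 for this realization
    rhs += np.sum(np.abs(F)**2) / N  # ||F||^2 with NumPy's FFT normalization
lhs /= trials
rhs /= trials
# lhs == rhs (discrete Parseval, exact per realization);
# both estimate N * Var(f) = 128/3
```

Note the 1/N factor: NumPy's unnormalized forward FFT puts the whole normalization in the inverse transform, so Parseval picks up the 1/N on the frequency side.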
 