Probability of seeing peak noise in a given time window

jaydnul
Hi!

Say I have an electric signal with an RMS noise value of 10 µV. I would calculate the peak noise by multiplying by 6.6, giving 66 µV. I am looking for an equation that describes the probability of seeing a noise voltage that reaches 66 µV within a given viewing time window. For example, if I look at the voltage signal for 20 µs, what is the probability of seeing 66 µV?

Thanks!
 
I think it depends upon the "bandwidth" of your measurement apparatus. For instance, if the bandwidth is 1 MHz, then you are effectively taking 20 independent "samples" in 20 µs. What is the probability that a single sample exceeds 6.6 σ for a (presumably) Gaussian distribution?

Consider that this is from a non-statistician, so corrections are invited!
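
For concreteness, here is a minimal sketch of that estimate in Python (assuming white Gaussian noise and treating bandwidth × window as the number of effectively independent samples; the numbers are just the ones in this example):

```python
# Sketch of the estimate above: assume white Gaussian noise observed through a
# bandwidth B, so a window of length T contains roughly N = B*T effectively
# independent samples, and ask for the chance that at least one exceeds k*sigma.
from math import erfc, sqrt

def prob_peak_in_window(k_sigma=6.6, bandwidth_hz=1e6, window_s=20e-6, two_sided=True):
    n_samples = bandwidth_hz * window_s            # effective independent samples (~20 here)
    p_single = erfc(k_sigma / sqrt(2))             # P(|x| > k*sigma) for one Gaussian sample
    if not two_sided:
        p_single /= 2                              # count only positive excursions
    return 1.0 - (1.0 - p_single) ** n_samples     # at least one exceedance in the window

print(prob_peak_in_window())    # roughly 8e-10 for 6.6 sigma and 20 samples
```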
 
It depends on your model for the source of the noise.

If it is completely memoryless, the noise at an instant being independent of all preceding levels, then you have an infinity of independent samples in any interval, and you are guaranteed to exceed any given level somewhere in there.

In practice, noise is not like that. Any actual source of noise will have some duration. Your model could have a number, possibly infinite, of independent noise sources, each occurring according to a Poisson process with some distribution of duration and amplitude (and randomly +/-). These parameters would tail off rapidly down the sequence so that the sum of the noise stays reasonable.

But do you really care about the peak across a continuous interval or, as @hutchphd suggests, only at certain instantaneous samples in the interval?

Edit:
I've thought of a model that might be tractable.
An infinite population of potential sources, acting independently: in each period ##\delta t## there is a probability ##\lambda\delta t## that a new source starts, and each currently active source stops with probability ##\mu\delta t##.
That yields a set of differential equations coupled like a recurrence relation. Using a generating function turns them into a PDE in two independent variables.
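
As a rough illustration (the parameters and amplitudes here are assumptions for the sketch, not anything from the thread), this kind of model can be Monte-Carloed directly: sources start as a Poisson process at rate ##\lambda##, last an exponential time with mean ##1/\mu##, contribute a random ± amplitude while active, and we estimate the chance that the summed signal exceeds a threshold anywhere in a window of length T.

```python
# Monte-Carlo sketch of the source model described above (illustrative parameters).
# Sources start as a Poisson process at rate lam, last exponentially distributed
# times with mean 1/mu, and each contributes a fixed +/- amplitude while active.
# Sources already active before the window opens are ignored (a small edge effect).
import numpy as np

rng = np.random.default_rng(0)
lam, mu, amp = 50.0, 10.0, 1.0        # start rate, stop rate, per-source amplitude
T, dt = 1.0, 1e-3                     # window length and time resolution
threshold = 6.0                       # level asked about, in the same units as amp

def window_peak():
    t_grid = np.arange(0.0, T, dt)
    v = np.zeros_like(t_grid)
    n_starts = rng.poisson(lam * T)                  # sources starting inside the window
    starts = rng.uniform(0.0, T, n_starts)
    durations = rng.exponential(1.0 / mu, n_starts)
    signs = rng.choice([-1.0, 1.0], n_starts)
    for s, d, sgn in zip(starts, durations, signs):
        v[(t_grid >= s) & (t_grid < s + d)] += sgn * amp
    return np.max(np.abs(v))

trials = 2000
p_exceed = np.mean([window_peak() >= threshold for _ in range(trials)])
print(f"estimated P(peak >= {threshold} in a window of length {T}): {p_exceed:.4f}")
```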
 
Here's my attempt using the model I outlined:
Let ##P_n(t)## be the probability that ##n## sources are currently active at time ##t##. For ##n>0##:
##P_n(t+\delta t)=P_n(t)(1-\lambda\delta t-n\mu\delta t)+P_{n-1}(t)\lambda\delta t+P_{n+1}(t)(n+1)\mu\delta t##
and
##P_0(t+\delta t)=P_0(t)(1-\lambda\delta t)+P_{1}(t)\mu\delta t##.
Whence, for ##n>0##, in the limit ##\delta t\to 0##:
##\dot P_n=-(\lambda+n\mu)P_n+\lambda P_{n-1}+(n+1)\mu P_{n+1}##
and
##\dot P_0=-\lambda P_0+\mu P_1##.
Using the generating function ##G(s)=\sum_{n=0}^\infty s^nP_n## and setting ##\dot P_n=0## for ##n>0## (steady state), I get
##(1-s)G'=\sigma(1-s)G-\sigma P_0+P_1##, where ##\sigma=\lambda/\mu##.
Unfortunately, the solution appears to involve integrating a double exponential.
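
As a numerical sanity check on the equations above (a sketch with assumed rates and a truncated state space, not part of the original argument), the master equations can also be integrated directly:

```python
# Numerical check of the master equations for P_n(t) above, truncated at n_max.
# Rates are illustrative; sigma = lam/mu. The loop in dPdt mirrors the equations,
# and the mean number of active sources should settle near lam/mu at long times.
import numpy as np

lam, mu = 5.0, 1.0
n_max = 60                              # truncation, chosen well above lam/mu

def dPdt(P):
    d = np.zeros_like(P)
    d[0] = -lam * P[0] + mu * P[1]
    for n in range(1, n_max):
        d[n] = -(lam + n * mu) * P[n] + lam * P[n - 1] + (n + 1) * mu * P[n + 1]
    d[n_max] = lam * P[n_max - 1] - n_max * mu * P[n_max]   # reflecting truncation
    return d

P = np.zeros(n_max + 1)
P[0] = 1.0                              # start with no active sources
dt, T = 1e-3, 20.0
for _ in range(int(T / dt)):            # crude forward-Euler integration
    P += dt * dPdt(P)

mean_n = float(np.dot(np.arange(n_max + 1), P))
print("long-time mean number of active sources:", mean_n)   # approaches lam/mu
```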
 