
Effect of a Moving Average on Gaussian Noise

  1. Aug 9, 2016 #1

    Twigg

    Gold Member

    Hi all,

    I think there is a really obvious answer to this, but I just don't see it yet. Suppose you had N data sets that all measured the same quantity as a function of time. Each data set shows the same signal plus a random noise component which is normally distributed about the signal with a constant standard deviation. If you were to take an average over the N data sets, you would expect to see the same signal with reduced noise. If I'm not mistaken, the more data sets you average, the lower the resulting noise. What would the standard deviation of the noise be in the averaged data? Thanks!
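    (For anyone who wants to play with this numerically, here is a minimal sketch of the setup; the sine signal, ##\sigma = 0.1##, and ##N = 50## below are arbitrary illustrative choices, not numbers from any real measurement.)

[code]
import numpy as np

rng = np.random.default_rng(0)

N = 50                               # number of repeated data sets
t = np.linspace(0.0, 1.0, 1000)
signal = np.sin(2 * np.pi * 5 * t)   # the same underlying signal in every data set
sigma = 0.1                          # noise standard deviation in a single data set

# N data sets: the same signal plus independent Gaussian noise of constant sigma
data = signal + rng.normal(0.0, sigma, size=(N, len(t)))

# Average over the N data sets (point by point in time)
avg = data.mean(axis=0)

print("noise std in one data set :", (data[0] - signal).std())
print("noise std after averaging :", (avg - signal).std())
[/code]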
     
  2. Aug 9, 2016 #2

    BvU

    Science Advisor
    Homework Helper
    Gold Member

    IF the signals correlate and the noise doesn't, the noise in the average goes down by a factor of ##\ 1/\sqrt N\ ##, i.e. you expect an improvement of the signal-to-noise ratio by a factor of ##\ \sqrt N\ ##, with ##N## the number of data sets.
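    A quick numerical check of that scaling (purely illustrative numbers: unit-##\sigma## Gaussian noise, a few values of ##N##): averaging ##N## noise-only traces should leave a residual standard deviation close to ##\sigma/\sqrt N##.

[code]
import numpy as np

rng = np.random.default_rng(1)
sigma = 1.0          # noise standard deviation of a single trace
n_points = 100_000   # samples per trace

for N in (1, 4, 16, 64, 256):
    # N uncorrelated Gaussian noise traces, averaged point by point
    noise = rng.normal(0.0, sigma, size=(N, n_points))
    residual = noise.mean(axis=0).std()
    print(f"N = {N:4d}   measured = {residual:.4f}   sigma/sqrt(N) = {sigma / np.sqrt(N):.4f}")
[/code]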
     
  3. Aug 9, 2016 #3

    Twigg

    Gold Member

    That makes sense, I think. Let me know if I've got this wrong:

    For each of the N data sets, the noise is Poisson distributed at each time with a sample size of 1, which has standard deviation ##\sigma = \sqrt{1} = 1##. For the average of the N data sets, the noise is Poisson distributed at each time with a sample size of N, which has a standard deviation of ##\sigma = \sqrt{N}##. That gives the factor of ##\frac{1}{\sqrt{N}}## in SNR.
     
  4. Aug 9, 2016 #4

    BvU

    Science Advisor
    Homework Helper
    Gold Member

    Didn't the thread mention Gaussian noise? Anyway, the key is: the noise is supposed to be uncorrelated, so it adds up in quadrature (squared), while the signal adds up directly. Divide by ##N## to average and the signal/noise goes like ##N/\sqrt N = \sqrt N##.
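    A short sketch of that bookkeeping, assuming a constant signal ##s## and unit-##\sigma## Gaussian noise (both made-up values): summing ##N## traces makes the signal grow like ##N## while the noise only grows like ##\sqrt N##.

[code]
import numpy as np

rng = np.random.default_rng(2)
sigma, s = 1.0, 1.0     # single-trace noise std and (constant) signal level
n_points = 100_000
N = 100

noise = rng.normal(0.0, sigma, size=(N, n_points))

summed_signal = N * s                        # the signal adds up directly
summed_noise_std = noise.sum(axis=0).std()   # uncorrelated noise adds in quadrature

print("summed noise std:", summed_noise_std,
      "  expected sigma*sqrt(N) =", sigma * np.sqrt(N))
print("S/N of the sum  :", summed_signal / summed_noise_std,
      "  expected sqrt(N)*s/sigma =", np.sqrt(N) * s / sigma)
[/code]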
     
  5. Aug 9, 2016 #5

    Twigg

    Gold Member

    I appreciate the correction!! I clearly need to review and practice more.

    Since I've already made a fool of myself once, can I ask you to verify that I've got it this time? If ##u_n(t)## is the noise component at time t in the n-th data set and ##\sigma## is the standard deviation of the Gaussian noise, then the noise in the averaged data set, ##\bar{u}(t) = \frac{1}{N}\sum_{n=1}^{N} u_n(t)##, has standard deviation ##\sigma_{avg} = \sqrt{\langle \bar{u}^2(t)\rangle} = \frac{\sqrt{\sum_{n=1}^{N} \langle u_{n}^2(t)\rangle}}{N} = \frac{\sqrt{N\sigma^2}}{N} = \frac{\sigma}{\sqrt{N}}##? So SNR improves with ##\sqrt{N}##.

    And from your last post, am I right to think that it doesn't matter whether the noise is Gaussian distributed (e.g., Johnson-Nyquist noise) or Poissonian (e.g., shot noise), so long as it is uncorrelated and the value of ##\sigma## doesn't change in time?
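    A small simulated check of that last point (the ##\sigma = 3## Gaussian noise and mean-9 Poisson counts below are just convenient choices with equal standard deviations): both kinds of uncorrelated noise average down by the same ##1/\sqrt N## factor.

[code]
import numpy as np

rng = np.random.default_rng(3)
n_points = 100_000
N = 25

gauss = rng.normal(0.0, 3.0, size=(N, n_points))        # Gaussian, sigma = 3
poisson = rng.poisson(9.0, size=(N, n_points)) - 9.0    # Poisson, mean 9 -> sigma = 3

for name, noise in (("Gaussian", gauss), ("Poisson", poisson)):
    single = noise[0].std()
    averaged = noise.mean(axis=0).std()
    print(f"{name:8s} single-trace std = {single:.3f}   "
          f"after N = {N} averages = {averaged:.3f}   "
          f"single/sqrt(N) = {single / np.sqrt(N):.3f}")
[/code]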
     
  6. Aug 14, 2016 #6

    chiro

    Science Advisor

    Hey Twigg.

    Just construct the test statistic and then get a variance for it and see what it is as a function of your sample.

    More information should always reduce the variance (via results like the Cramer-Rao bound). When it comes to relating the noise to the assessment of what the signal is saying: if the signal has enough redundancy and you collect enough actual information, then statistically (at least) the probability of mis-reading the signal, given all the data values, will be so low that there is a high degree of confidence that whatever is inferred is what was actually communicated.

    This idea is found in things like probabilistic prime testing (like Rabin-Miller testing).
     
  7. Aug 14, 2016 #7

    Svein

    Science Advisor

    A moving average is a filter, more specifically a low-pass filter. You can get the same effect using a digital low-pass filter:

    Assume that you take the moving average over N samples. Express this as [itex] y_{m}=\frac{1}{N}\sum_{i=m-N+1}^{m}x_{i}[/itex]. Then [itex]y_{m+1}=y_{m}+\frac{1}{N}x_{m+1}-\frac{1}{N}x_{m-N+1} [/itex]. From the formula for y_m, the oldest sample is roughly equal to the running average, [itex]x_{m-N+1}\approx y_{m} [/itex]. Therefore a good approximation to a moving average over N samples is [itex]y_{m+1}=\frac{N-1}{N}y_{m}+\frac{1}{N}x_{m+1} [/itex]...
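    A sketch comparing the exact N-sample moving average with that recursive approximation (the noise level, signal level, and N below are arbitrary illustrative choices):

[code]
import numpy as np

rng = np.random.default_rng(4)

N = 20
x = 1.0 + rng.normal(0.0, 0.2, size=2000)   # noisy samples around a constant level

# Exact N-sample moving average (boxcar FIR filter)
boxcar = np.convolve(x, np.ones(N) / N, mode="valid")

# Recursive approximation: y[m+1] = (N-1)/N * y[m] + 1/N * x[m+1]
y = np.empty_like(x)
y[0] = x[0]
for m in range(len(x) - 1):
    y[m + 1] = (N - 1) / N * y[m] + x[m + 1] / N

print("boxcar output std   :", boxcar.std())
print("recursive output std:", y[N:].std())   # skip the start-up transient
[/code]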
     
  8. Aug 15, 2016 #8

    Twigg

    Gold Member

    Thanks all for the replies!

    I'm not sure I fully understand, as my statistics knowledge is lacking and I'm not familiar with the lingo. I think this is what I tried to show in my last post. I record the variance in each of the N measurements, and then take the average variance of the sample. If the signal is the same in each measurement (which I assumed), the variance is just the time-average of the square of the noise. Assuming the noise doesn't get bigger or smaller between measurements, the variance in each measurement should be roughly constant. By that argument, I thought the noise standard deviation after N averages would be less by a factor of ##\sqrt{N}##. Does that make sense? Intuitively I understand why averaging reduces noise levels; it's the quantitative analysis (how much the noise gets reduced) that I'm having more difficulty with.

    @Svein I think I didn't explain the problem well. The "moving average" I meant isn't temporal; it's over separate data sets, each taken over time. For example, suppose I had a DC power supply which produced 3 V DC plus 10 mV of noise, and I saved voltage-vs-time data on an oscilloscope over 10 seconds and repeated this 100 times. I'm asking about averaging over the 100 different 10 s long traces, not a moving average of the data within each 10 s trace. Sorry for the confusion.
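    That example is easy to simulate; the numbers below simply mirror the description (a 3 V level, 10 mV of Gaussian noise, 100 repeated traces, and an arbitrary 10 000 samples per trace). With 100 traces the averaged noise should come out near 1 mV.

[code]
import numpy as np

rng = np.random.default_rng(5)

n_traces = 100       # repeated oscilloscope acquisitions
n_samples = 10_000   # samples per 10 s trace (arbitrary)
level = 3.0          # "signal": 3 V DC
sigma = 0.010        # 10 mV of Gaussian noise on each trace

traces = level + rng.normal(0.0, sigma, size=(n_traces, n_samples))
avg_trace = traces.mean(axis=0)   # average the 100 traces, sample by sample

print("noise on one trace      : %.2f mV" % (1e3 * traces[0].std()))
print("noise on averaged trace : %.2f mV" % (1e3 * avg_trace.std()))
print("expected: 10 mV / sqrt(100) = 1 mV")
[/code]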
     
  9. Aug 16, 2016 #9

    chiro

    Science Advisor

    A test statistic is technically just a function of a set of random variables - like a normal function except each variable is a random variable and not a deterministic one.

    You have a moving average or some other quantity you are trying to estimate (i.e. you are estimating something that is a function of your random variables but is constant regardless of them, since the distribution you are looking at is supposed to be the same for all of the sample elements), and you construct what is called an estimator.

    An estimator is something that is used to get the distribution of the parameter you are trying to actually find.

    Usually it's a mean, standard deviation, variance, median or something of that nature.

    If it's an expectation, then the standard statistical trick for large samples is the Central Limit Theorem and its analogues. This means that if you look at sums (or means) of variables that are IID (independent and identically distributed), then their distribution becomes approximately normal.

    When you do the expectation and variance calculations on this, you find that the standard deviation of the sample mean is ##\sigma/\sqrt{n}## and its expectation is just the underlying mean. You usually have to estimate ##\sigma## using the sample variance, and the sample mean is your estimate of the actual mean.
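    A quick illustration of that statement with made-up numbers: draw many samples of size n from a decidedly non-normal (exponential) distribution, compute each sample mean, and compare the spread of those means with ##\sigma/\sqrt{n}##.

[code]
import numpy as np

rng = np.random.default_rng(6)

n = 50              # sample size
n_repeats = 20_000  # number of independent samples of size n

# Exponential distribution: skewed, true mean = 1, true sigma = 1
samples = rng.exponential(1.0, size=(n_repeats, n))
means = samples.mean(axis=1)

print("mean of the sample means:", means.mean())   # ~ the true mean, 1
print("std of the sample means :", means.std())    # ~ sigma/sqrt(n)
print("sigma/sqrt(n)           :", 1.0 / np.sqrt(n))
[/code]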

    This is used to get point estimates and confidence intervals.

    I'd strongly suggest you pick up an introductory textbook on statistical inference (which is the body of knowledge concerning this sort of thing) and look at the Maximum Likelihood Estimator technique to get an idea of how they are constructed.

    That way, when you look at the formulas, they can be followed rather than taken completely "on faith".
     