Sampling theorem with noise

  1. Aug 25, 2010 #1
    The sampling theorem says that we should sample at a rate of at least twice the bandwidth of the signal: Fs >= 2B.
    Ideally Fs = 2B, i.e. oversampling is unnecessary.
    However, there is always noise present in the signal. The Shannon–Hartley theorem connects the SNR with the bandwidth and the channel capacity for a binary digital signal: http://en.wikipedia.org/wiki/Shannon–Hartley_theorem
    But I deal with an analog signal which is similar to an audio signal. What is the relation between the sampling rate and the SNR?
    Wiki on oversampling says:
    "Noise reduction/cancellation. If multiple samples are taken of the same quantity with a different (and uncorrelated) random noise added to each sample, then averaging N samples reduces the noise variance (or noise power) by a factor of 1/N"
    I can't find a proof of this.
    To make my question precise: I have an oversampling signal-acquisition system, and the oversampling normally improves the accuracy of the spectrum calculation. A Matlab simulation confirms it.
    I'd like to have a mathematical proof of this.
    Thank you
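    For independent, zero-mean noise the proof of the 1/N claim is one line: if the N samples are x_i = s + n_i with Var(n_i) = σ² and the n_i uncorrelated, then Var((1/N)Σx_i) = (1/N²)·N·σ² = σ²/N. A minimal NumPy check of this (an illustrative sketch, not Matlab code from the thread):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    true_value = 1.0
    N = 100        # number of independent samples averaged per measurement
    trials = 20000 # repeat many times to estimate the variance of the average
    sigma = 0.5    # noise standard deviation

    # Each trial: N independent noisy samples of the same quantity, then average.
    samples = true_value + sigma * rng.normal(size=(trials, N))
    averages = samples.mean(axis=1)

    var_single = sigma**2        # variance of one raw sample
    var_average = averages.var() # empirical variance of the N-sample average

    # The ratio should come out close to N, confirming the 1/N reduction.
    print(var_single / var_average)
    ```

    Note that this relies on the noise in the N samples being uncorrelated, which is exactly the assumption the replies below question for an oversampled, anti-alias-filtered system.
    
    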
  3. Aug 26, 2010 #2
    You have rather indiscriminately jumbled together a bunch of separate concepts in this post. First, Shannon's channel capacity theorem usually has little to do with sampling. The theoretical maximum information transfer rate over a noisy channel is approached in practice by error correction, Viterbi and turbo coding, block interleaving, and other tools available to the communications engineer.

    Second, signal averaging is most commonly applied to repetitive signals with independent noise. A classic example is trying to detect a weak radar echo return that is below the radar receiver's thermal noise floor. If the returns from many transmit pulses are averaged, the echo signal power adds faster than the noise power and the SNR increases. The math behind it can be found in books on signals and systems (I'd look in Oppenheim and Willsky, or Root and Davenport). Here is a wiki page:
    http://en.wikipedia.org/wiki/Signal_averaging
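    The radar example above can be checked numerically. In this sketch (the pulse shape, amplitude, and noise level are illustrative assumptions, not values from the thread) a weak echo sits well below the noise floor of a single return, and averaging M returns with independent noise raises the SNR by roughly a factor of M:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    n = 512                   # samples per radar return
    t = np.arange(n)
    echo = 0.1 * np.exp(-0.5 * ((t - 256) / 20.0) ** 2)  # weak Gaussian echo
    sigma = 1.0               # receiver noise std; echo peak is 20 dB below it
    M = 400                   # number of transmit pulses averaged

    # Each return: the same echo plus independent thermal noise.
    returns = echo + sigma * rng.normal(size=(M, n))
    avg = returns.mean(axis=0)

    # SNR estimate: peak echo power over noise variance.
    snr_single = (echo.max() / sigma) ** 2
    snr_avg = echo.max() ** 2 / avg[:100].var()  # [:100] is a noise-only region

    # The improvement should come out on the order of M.
    print(snr_avg / snr_single)
    ```

    The key condition is again independence: the echo adds coherently (amplitude grows like M) while the noise adds incoherently (power grows like M), so the power ratio improves by M.
    
    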

    Are you suggesting that oversampling improves the SNR? If so, then this is not supported by theory for a properly designed sampling system. By Shannon's sampling theorem, the signal bandwidth must be limited to less than Fs/2 by an anti-aliasing filter in front of the sampler. This filter also limits the noise bandwidth, with the result that samples taken at a sampling rate greater than Fs/2 are no longer independent. Say you oversample by a factor of b, then average every b samples to produce one "improved" sample. Both signal and noise are correlated over the b samples, so the SNR of the improved samples is not appreciably better. This may be demonstrated mathematically by applying the Wiener–Khinchin theorem that relates a signal's autocorrelation to its power spectrum. This theorem should be in the same books, or see Bracewell's The Fourier Transform and Its Applications (he calls it the autocorrelation theorem).
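    The point about correlated samples can be illustrated numerically. In this sketch (the brick-wall FFT filter stands in for a real anti-aliasing filter; the factor b = 8 is an arbitrary choice) the same block-averaging that cuts white-noise variance by b barely touches noise that has already been band-limited to Fs/(2b):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    n = 2**18
    b = 8  # oversampling factor
    white = rng.normal(size=n)

    # Brick-wall low-pass: keep only the lowest 1/b of the band, mimicking
    # an anti-aliasing filter in front of a b-times oversampled converter.
    spec = np.fft.rfft(white)
    spec[len(spec) // b:] = 0.0
    bandlimited = np.fft.irfft(spec, n)

    # Average every b consecutive samples ("decimation by averaging").
    avg_white = white.reshape(-1, b).mean(axis=1)
    avg_band = bandlimited.reshape(-1, b).mean(axis=1)

    ratio_white = white.var() / avg_white.var()      # close to b: independent noise
    ratio_band = bandlimited.var() / avg_band.var()  # close to 1: correlated noise

    print(ratio_white, ratio_band)
    ```

    The white-noise ratio comes out near b, while the band-limited ratio stays near 1, because the anti-aliasing filter has already removed the out-of-band noise that averaging would otherwise suppress.
    
    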

    On the other hand, you later say that "oversampling normally improves the accuracy in spectrum calculation." Spectral analysis is an entirely different topic, unrelated to the ones above dealing with SNR. Do you really mean accuracy, by the way, or resolution? If you take some time to learn about, digest and separate the different topics you have mixed up here, and come back with a single well-posed question, we'll be better able to help you.