# Oversampling and Resolution

I found online that if you want to increase the resolution of ADCs by n-bit, you need to oversample by 4^n times.

My question is: why isn't it that if you want to increase resolution by 1 bit you need 2 times the samples, by 2 bits you need 4 times, and by 3 bits you need 6 times?

sophiecentaur
Gold Member
I have a feeling that it is to do with the fact that increasing the step size by a factor of two will increase the quantising noise (aka distortion) power by a factor of four. Hence, you need to spread that power over four times the bandwidth by sampling at four times the rate. That's where the "4" comes from.
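That factor of four is easy to check numerically: for a uniform quantizer with step size $\Delta$, the quantizing noise power is $\Delta^2/12$, so doubling the step size quadruples it. A minimal sketch (the step sizes 0.01 and 0.02 are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 100_000)  # test signal that exercises many quantizer steps

def quantize(x, step):
    """Ideal uniform (mid-tread) quantizer with the given step size."""
    return step * np.round(x / step)

for step in (0.01, 0.02):  # doubling the step size, i.e. one fewer bit
    p = np.mean((quantize(x, step) - x) ** 2)
    print(f"step {step}: noise power {p:.3e}, theory step^2/12 = {step**2 / 12:.3e}")
```

The measured ratio between the two noise powers comes out very close to 4, matching the $\Delta^2/12$ model.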

meBigGuy
Gold Member
Saying the same thing in different words:
Doubling the sample frequency reduces the in-band quantizing noise by 3 dB (that is, half the quantizing noise power now lies above the frequencies you care about). Since each bit represents 6 dB, you need to double the sample rate twice, i.e. quadruple it, to gain a full bit of resolution in the band of interest. You then filter down to the original bandwidth to remove the unwanted noise, and decimate.

See http://www.atmel.com/images/doc8003.pdf
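A quick numerical check of that claim. This is a sketch, not production code: an ideal quantizer with a little dither added so the quantizing noise behaves like white noise, and a simple boxcar average standing in for the real decimation filter.

```python
import numpy as np

def adc(x, bits):
    # Ideal mid-tread quantizer spanning [-1, 1]
    step = 2.0 / 2**bits
    return np.clip(step * np.round(x / step), -1.0, 1.0)

bits, n = 8, 1 << 16
t = np.arange(n)
x = 0.7 * np.sin(2 * np.pi * 0.01 * t)                # slow sine, well inside the band
rng = np.random.default_rng(1)
dither = rng.uniform(-0.5, 0.5, n) * (2.0 / 2**bits)  # ~1 LSB, whitens the quantizing noise

# 4x oversampled: quantize, then boxcar-average every 4 samples and decimate
y4 = adc(x + dither, bits).reshape(-1, 4).mean(axis=1)
p4 = np.mean((y4 - x.reshape(-1, 4).mean(axis=1)) ** 2)

# Nyquist-rate reference: quantize every 4th sample directly
y1 = adc(x[::4] + dither[::4], bits)
p1 = np.mean((y1 - x[::4]) ** 2)

print(f"noise power at 1x:            {p1:.3e}")
print(f"noise power at 4x + decimate: {p4:.3e}")
```

On a run like this the 4x-plus-decimate noise power comes out roughly a quarter of the 1x figure, i.e. about 6 dB, or one extra bit, just as described above.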

big guy is correct regarding non-noise-shaped oversampling. There are noise-shaped converters that sample at something like 3.something MHz, have a 1-bit internal converter, and, because of noise shaping, achieve resolution better than you would normally get with $4^n$ oversampling.

meBigGuy
Gold Member
Sigma-delta (or delta-sigma) converters essentially push the quantizing noise into a higher band. Think about a 1-bit A/D being fed a sine wave at 1/100th the sample rate: it outputs 50 samples of 1 followed by 50 samples of 0. Now imagine we designed a converter that switched between 1 and 0 very often, such that the average DC value obtained by filtering the output was closer to the actual sine-wave value. That is done by integrating the total error between the input and output and deciding at every sample whether to increase or decrease the output (trying to keep the total error at zero). The result is more high-frequency noise and better low-frequency accuracy. There are 1st-order, 2nd-order, 3rd-order, etc. delta-sigma modulators, each pushing more quantizing noise into the upper spectrum, where it is filtered out during decimation. The simplest (but not optimal) decimation filter for a sigma-delta is a count-and-dump that simply counts the number of 1's in each decimation period; that count becomes the new sample.

Here is an example.
http://www.cds.caltech.edu/~murray/amwiki/images/8/89/Deltasigma-simulation.png

The output of a traditional 1-bit converter would be a square wave. Think about the frequency content of the quantizing noise from a simple square wave as opposed to the S/D pulse train.
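The first-order loop described above, plus a count-and-dump decimator, fits in a few lines. A sketch, with some choices of my own not taken from the thread: a $\pm 1$ output convention and an oversampling ratio of 64.

```python
import numpy as np

def delta_sigma(x):
    """First-order delta-sigma modulator: integrate (input - feedback),
    quantize the integrator sign to +/-1, and feed that 1-bit output back."""
    acc, fb = 0.0, 0.0
    y = np.empty(len(x))
    for i, s in enumerate(x):
        acc += s - fb                     # running total of the error
        fb = 1.0 if acc >= 0.0 else -1.0  # 1-bit decision keeps total error near zero
        y[i] = fb
    return y

osr = 64                                  # oversampling ratio (my choice)
n = 64 * osr
t = np.arange(n)
x = 0.5 * np.sin(2 * np.pi * t / n)       # one slow sine period, amplitude < 1

bitstream = delta_sigma(x)

# Count-and-dump decimation: averaging each block of +/-1 samples is the
# same thing (up to scaling) as counting the 1's in the block.
dec = bitstream.reshape(-1, osr).mean(axis=1)
ref = x.reshape(-1, osr).mean(axis=1)
print("rms error after decimation:", np.sqrt(np.mean((dec - ref) ** 2)))
```

Applying the same block average to a plain comparator output (sign of the input, no feedback loop) gives a far larger error near the sine peaks, which is the whole point of the loop: the feedback trades low-frequency accuracy for high-frequency noise that the decimation filter then removes.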
