# Oversampling and Resolution

• quantumlight
In summary, the reason you need 4^n times the samples to increase the resolution of an ADC by n bits is that each extra bit halves the quantising step, and halving the step reduces the quantising noise (aka distortion) power by a factor of four; matching that reduction by oversampling means spreading the noise over four times the bandwidth.

#### quantumlight

I found online that if you want to increase the resolution of ADCs by n-bit, you need to oversample by 4^n times.

My question is: why is it not that if you want to increase by 1 bit, you need 2 times the samples; by 2 bits, 4 times; and by 3 bits, 6 times?

I have a feeling that it is to do with the fact that increasing the step size by a factor of two will increase the quantising noise (aka distortion) power by a factor of four. Hence, you need to spread that power over four times the bandwidth by sampling at four times the rate. That's where the "4" comes from.

Saying the same thing in different words:
Doubling the sample frequency reduces the in-band quantizing noise by 3 dB (that is, half the quantizing noise is now above the frequency you care about). Since each bit represents 6 dB, you need to double the sample rate twice to get a full bit of noise reduction in the band of interest. You then have to filter down to the original bandwidth to get rid of the undesired noise and then decimate.

See http://www.atmel.com/images/doc8003.pdf
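The oversample-then-decimate argument above is easy to check numerically. The following is a minimal sketch (my own illustration, not from the thread or the Atmel note): a sine is quantized with a little dither so the quantization error behaves like white noise, then oversampled 4x and decimated by simple 4-sample averaging. The bit depth, oversampling ratio, and signal chosen here are arbitrary assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
bits = 8
step = 2.0 / (1 << bits)           # quantizer step for a [-1, 1) range

def quantize(x, step):
    return np.round(x / step) * step

# A slow sine, plus dither so the quantization error is noise-like (white-ish)
n = 1 << 16
osr = 4                             # oversampling ratio: 4x for ~1 extra bit
t = np.arange(n * osr)
x = 0.5 * np.sin(2 * np.pi * t / (200 * osr))
dither = rng.uniform(-step / 2, step / 2, size=x.size)

q = quantize(x + dither, step)

# Decimate by 4-sample block averaging (a crude low-pass filter + downsample)
q_dec = q.reshape(-1, osr).mean(axis=1)
x_dec = x.reshape(-1, osr).mean(axis=1)

def snr_db(sig, est):
    err = est - sig
    return 10 * np.log10(np.mean(sig ** 2) / np.mean(err ** 2))

print("SNR at the oversampled rate: %.1f dB" % snr_db(x, q))
print("SNR after 4x decimation:     %.1f dB" % snr_db(x_dec, q_dec))
```

Averaging 4 samples cuts the white noise power by a factor of 4, so the decimated SNR should come out roughly 6 dB higher, i.e. about one extra bit, which is exactly the 4^1 rule.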

meBigGuy is correct regarding non-noise-shaped oversampling. there are noise-shaped converters that sample at something like 3.something MHz, have a 1-bit internal converter, and because of noise shaping have resolution better than you would normally find with $4^n$ oversampling.

Sigma-delta (or delta-sigma) converters essentially push the noise into a higher band. Think about a 1-bit A/D being fed a sine wave at 1/100 of the sample rate. It has 50 samples of 1 followed by 50 samples of zero. Now, imagine we designed a converter that switched between 1 and zero very often such that the average DC value from filtering the output was closer to the actual sine wave value. That is done by integrating the total error between the input and output and deciding at every sample whether to increase or decrease that value (trying to keep total error at zero). The result is more high-frequency noise and better low-frequency accuracy.

There are 1st order, 2nd order, 3rd order, etc. delta-sigma modulators, each pushing more quantizing noise into the upper spectrum, where it is filtered during decimation. The simplest (but not the optimal) filter for a sigma-delta is a count-and-dump that simply counts the number of 1's in your decimation period, and that becomes the new sample.

Here is an example.
http://www.cds.caltech.edu/~murray/amwiki/images/8/89/Deltasigma-simulation.png

A traditional 1-bit converter would output a square wave. Think about the frequency of the quantizing noise from a simple square wave as opposed to the S/D pulse train.
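The integrate-the-error-and-decide loop described above can be sketched in a few lines. This is a hedged illustration of a first-order delta-sigma modulator with a count-and-dump decimator, with the input frequency, oversampling amount, and block size chosen arbitrarily for the demo:

```python
import numpy as np

def first_order_delta_sigma(x):
    """First-order delta-sigma modulator with a 1-bit (+/-1) quantizer:
    accumulate the input/output error and flip the output bit at every
    sample to keep the running error near zero."""
    y = np.empty_like(x)
    integ = 0.0
    out = 1.0
    for i, s in enumerate(x):
        integ += s - out                     # integrate the error
        out = 1.0 if integ >= 0 else -1.0    # 1-bit quantizer decision
        y[i] = out
    return y

# Slow sine (1/1000 of the sample rate), heavily oversampled
n = 100_000
x = 0.7 * np.sin(2 * np.pi * np.arange(n) / 1000)
bits = first_order_delta_sigma(x)

# "Count and dump" decimation: average blocks of 50 one-bit samples
dec = bits.reshape(-1, 50).mean(axis=1)
ref = x.reshape(-1, 50).mean(axis=1)
rms_err = np.sqrt(np.mean((dec - ref) ** 2))
print("RMS error after decimation:", rms_err)
```

Even though the modulator output is only ever +1 or -1, the block averages track the sine closely, because the noise-shaping loop pushed the quantization error up to frequencies the averaging filter removes.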

People often lump delta-sigma modulation together with oversampling, but they are distinct, as meBigGuy and rbj have shown. There are many cases in practice where oversampling is used without noise shaping (although I can't imagine a case where noise shaping would be useful without oversampling...)

analogdesign said:
People often lump delta-sigma modulation together with oversampling, but they are distinct, as meBigGuy and rbj have shown. There are many cases in practice where oversampling is used without noise shaping (although I can't imagine a case where noise shaping would be useful without oversampling...)

often noise shaping is used without oversampling to steer quantization error from where you don't want it to where you can tolerate it, even if it is not out-of-band.

http://www.dspguru.com/dsp/tricks/fixed-point-dc-blocking-filter-with-noise-shaping

i've also done this for regular IIR filtering of audio data in fixed point, say, with the old Motorola 56K.
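One common form of noise shaping without oversampling is first-order error feedback: feed each sample's rounding error back into the next sample, so the quantization noise is pushed toward high frequencies even at the original rate. The sketch below is my own minimal illustration of that idea (it is not the dspguru DC-blocking filter or rbj's 56K code); the step size and DC level are arbitrary assumptions:

```python
import numpy as np

def quantize_noise_shaped(x, step):
    """Requantize with first-order error feedback: each sample's rounding
    error is added to the next sample before rounding, which pushes the
    quantization noise toward high frequencies. No oversampling involved."""
    y = np.empty_like(x)
    err = 0.0
    for i, s in enumerate(x):
        v = s + err                        # add back the previous error
        y[i] = np.round(v / step) * step   # coarse quantizer
        err = v - y[i]                     # error to feed forward
    return y

x = np.full(10_000, 0.30)   # a DC level that falls between quantizer steps
step = 1.0                  # very coarse: nearest-integer quantizer

plain = np.round(x / step) * step          # plain rounding: stuck at 0.0
shaped = quantize_noise_shaped(x, step)    # toggles between 0 and 1

print("plain mean: ", plain.mean())
print("shaped mean:", shaped.mean())
```

Plain rounding loses the 0.30 entirely, while the error-feedback output toggles so that its long-run average recovers it; the price is high-frequency toggling noise, which is exactly the "steer the error to where you can tolerate it" trade rbj describes.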

## 1. What is oversampling and how does it affect resolution?

Oversampling is the process of acquiring data at a rate higher than the minimum (Nyquist) rate required for a given signal or image. It can improve effective resolution because the extra samples capture additional information that can be traded for precision through filtering and averaging.

## 2. Why is oversampling important in scientific research?

Oversampling is important in scientific research because it allows for more accurate and precise measurements. By capturing more data points, researchers can reduce the effects of noise and increase the resolution of their data.

## 3. What is the relationship between oversampling and signal-to-noise ratio?

Oversampling can improve the signal-to-noise ratio because the quantization noise (and other uncorrelated noise) is spread over a wider bandwidth while the signal stays confined to its original band. Filtering and decimating back to that band removes much of the noise but preserves the signal, resulting in a higher signal-to-noise ratio.
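The averaging side of this can be demonstrated with a quick simulation. This is an illustrative sketch (the measurement count, noise level, and trial count are assumptions chosen for the demo): averaging m repeated measurements of the same quantity, each corrupted by independent noise, improves the SNR by about 10*log10(m) dB.

```python
import numpy as np

rng = np.random.default_rng(1)

signal = 1.0                                # the (DC) quantity being measured
m = 16                                      # measurements averaged per trial
noise = rng.normal(0, 0.1, size=(100_000, m))

single = signal + noise[:, 0]               # one measurement per trial
averaged = signal + noise.mean(axis=1)      # average of m measurements

def snr_db(err):
    return 10 * np.log10(signal ** 2 / np.mean(err ** 2))

print("single-sample SNR: %.1f dB" % snr_db(single - signal))
print("16x-averaged SNR:  %.1f dB" % snr_db(averaged - signal))
```

With m = 16 the noise power drops by a factor of 16, so the SNR gain should come out near 10*log10(16), roughly 12 dB, i.e. about two extra bits of effective resolution.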

## 4. What are some techniques for oversampling in scientific experiments?

Some techniques for oversampling in scientific experiments include increasing the sampling rate, using multiple detectors, and averaging multiple measurements. These techniques can help to capture more data points and improve the resolution of the final results.

## 5. Are there any drawbacks to oversampling in scientific experiments?

One potential drawback of oversampling is the cost and time required to process and analyze the increased amount of data. Additionally, oversampling may not always result in significant improvements in resolution, and in some cases, it may even introduce artifacts or distortions in the data.