Why is oversampling by 4^n times necessary for increasing ADC resolution?


Discussion Overview

The discussion revolves around the necessity of oversampling by 4^n times to increase the resolution of Analog-to-Digital Converters (ADCs) by n bits. Participants explore the relationship between oversampling, quantizing noise, and the effects on signal fidelity, touching on concepts such as noise shaping and delta-sigma modulation.

Discussion Character

  • Technical explanation
  • Conceptual clarification
  • Debate/contested

Main Points Raised

  • One participant questions the rationale behind the 4^n oversampling requirement, suggesting that a linear increase in bits might imply a simpler relationship with sample counts.
  • Another participant proposes that increasing the step size by a factor of two raises quantizing noise power by a factor of four, necessitating four times the sampling rate to manage this noise.
  • A different viewpoint reiterates that doubling the sample frequency reduces inband quantizing noise by 3dB, indicating that two doublings are required to achieve a full bit of noise reduction.
  • One participant mentions that noise-shaped converters can achieve better resolution than what 4^n oversampling would suggest, highlighting the role of noise shaping in ADC performance.
  • Another participant explains the operation of sigma-delta converters, detailing how they manage quantizing noise by pushing it into higher frequencies while maintaining low-frequency accuracy through integration of error.
  • There is a discussion on the distinction between delta-sigma modulation and oversampling, with participants noting that oversampling can occur without noise shaping, although the two are often conflated.
  • One participant points out that noise shaping can be applied without oversampling to direct quantization error to more tolerable regions, suggesting flexibility in the application of these techniques.

Areas of Agreement / Disagreement

Participants express differing views on the necessity and implications of oversampling by 4^n times, with some supporting the established relationship while others challenge or refine the understanding of noise shaping and its effects on ADC resolution. No consensus is reached on the optimal approach or the necessity of the 4^n oversampling rule.

Contextual Notes

Participants discuss various assumptions regarding quantizing noise, the effects of sampling rates, and the role of noise shaping, but these assumptions remain unresolved and depend on specific definitions and contexts.

quantumlight
I found online that if you want to increase the resolution of ADCs by n-bit, you need to oversample by 4^n times.

My question is: why isn't it that to increase by 1 bit you need 2 times the samples, for 2 bits you need 4 times, and for 3 bits you need 6 times?
 
I have a feeling that it is to do with the fact that increasing the step size by a factor of two will increase the quantising noise (aka distortion) power by a factor of four. Hence, you need to spread that power over four times the bandwidth by sampling at four times the rate. That's where the "4" comes from.
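This claim is easy to check numerically. Below is a quick Python sketch (not from the thread; it assumes a simple mid-tread uniform quantizer and a uniformly distributed test signal) showing that doubling the step size roughly quadruples the quantizing-noise power:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 100_000)  # test signal spanning the converter range

def quantize(sig, step):
    """Mid-tread uniform quantizer with the given step size."""
    return np.round(sig / step) * step

q = 2.0 / 256          # step of an 8-bit converter over [-1, 1)
for step in (q, 2 * q):
    noise = quantize(x, step) - x
    print(f"step={step:.6f}  noise power={np.mean(noise**2):.3e}")

# The measured powers sit near step**2 / 12, so doubling the step
# multiplies the quantizing-noise power by roughly four.
```

Both measurements should track the standard step²/12 model, so the ratio of the two noise powers comes out close to 4.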
 
Saying the same thing in different words:
Doubling the sample frequency reduces the inband quantizing noise by 3 dB (that is, half the quantizing noise is now above the frequency you care about). Since each bit represents 6 dB, you need to double it twice to get a full bit of noise reduction in the band of interest. You then have to filter down to the original bandwidth to get rid of the undesired noise and then decimate.

See http://www.atmel.com/images/doc8003.pdf
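The oversample-filter-decimate chain described above can be sketched numerically. The following Python example (an illustration, not taken from the app note; it assumes an 8-bit mid-tread quantizer, 1-LSB uniform dither to decorrelate the error, and a simple 4-sample boxcar as the decimation filter) shows 4x oversampling cutting the rms quantization error roughly in half, i.e. one extra bit:

```python
import numpy as np

rng = np.random.default_rng(1)
fs, osr, n_bits = 1_000, 4, 8
step = 2.0 / 2 ** n_bits

# Clean slow ramp across most of the converter range.
t = np.arange(fs * osr) / (fs * osr)
s = 0.9 * t - 0.45

# Dither decorrelates the quantization error so averaging can recover
# sub-step detail; oversampling a noiseless static input gains nothing.
x = s + rng.uniform(-step / 2, step / 2, t.size)
q = np.round(x / step) * step                 # 8-bit samples at 4x rate

# Filter down and decimate: here a simple 4-sample boxcar average.
dec = q.reshape(-1, osr).mean(axis=1)
s_dec = s.reshape(-1, osr).mean(axis=1)

err_fast = np.sqrt(np.mean((q - s) ** 2))
err_dec = np.sqrt(np.mean((dec - s_dec) ** 2))
print(f"rms error before decimation: {err_fast:.2e}")
print(f"rms error after  4x average: {err_dec:.2e}")
```

Averaging 4 roughly independent error samples divides the noise power by 4 (6 dB), so the rms error drops by about a factor of two, exactly one bit.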
 
meBigGuy is correct regarding non-noise-shaped oversampling. There are noise-shaped converters that sample at something like 3.something MHz, have a 1-bit internal converter, and, because of noise shaping, have resolution better than you would normally find with [itex]4^n[/itex] oversampling.
 
Sigma-delta (or delta-sigma) converters essentially push the noise into a higher band. Think about a 1-bit A/D being fed a sine wave at 1/100 the sample rate: it has 50 samples of 1 followed by 50 samples of zero. Now imagine we designed a converter that switched between 1 and zero very often, such that the average DC value from filtering the output was closer to the actual sine wave value. That is done by integrating the total error between the input and output and deciding at every sample whether to increase or decrease the output (trying to keep the total error at zero). The result is more high-frequency noise and better low-frequency accuracy. There are 1st order, 2nd order, 3rd order, etc. delta-sigma modulators, each pushing more quantizing noise into the upper spectrum, where it is filtered out during decimation. The simplest (but not optimal) filter for a sigma-delta is a "count and dump" that simply counts the number of 1's in your decimation period; that count becomes the new sample.

Here is an example.
http://www.cds.caltech.edu/~murray/amwiki/images/8/89/Deltasigma-simulation.png

A traditional 1 bit converter would be a square wave. Think about the frequency of the quantizing noise from a simple square wave as opposed to the S/D pulse train.
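A first-order modulator like the one described takes only a few lines to simulate. Here is a Python sketch (an illustration with an assumed ±1 output rather than the 1/0 convention above, plus the "count and dump" decimator mentioned in the post):

```python
import numpy as np

def delta_sigma_1st(x):
    """First-order delta-sigma modulator with a 1-bit (+/-1) quantizer.

    The integrator accumulates the running error between input and
    output; the comparator flips the output bit to drive that error
    back toward zero, pushing the quantizing noise up in frequency.
    """
    integ = 0.0
    prev = 0.0
    out = np.empty_like(x)
    for i, xi in enumerate(x):
        integ += xi - prev               # integrate input minus fed-back output
        out[i] = 1.0 if integ >= 0 else -1.0
        prev = out[i]
    return out

fs, f0, decim = 10_000, 10, 100          # 10 Hz tone, heavily oversampled
t = np.arange(fs) / fs
x = 0.5 * np.sin(2 * np.pi * f0 * t)
bits = delta_sigma_1st(x)

# "Count and dump" decimation: average blocks of 100 one-bit samples.
dec = bits.reshape(-1, decim).mean(axis=1)
ref = x.reshape(-1, decim).mean(axis=1)
print("rms error after decimation:", np.sqrt(np.mean((dec - ref) ** 2)))
```

Even though each raw output sample is just ±1, the block averages track the sine closely, because the integrator keeps the accumulated input-output error bounded over any window.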
 
People often lump delta-sigma modulation together with oversampling but they are distinct, as meBigGuy and rbj noted. There are many cases in practice where oversampling is used without noise shaping (although I can't imagine a case where noise shaping would be useful without oversampling...)
 
analogdesign said:
People often lump delta-sigma modulation together with oversampling but they are distinct, as meBigGuy and rbj noted. There are many cases in practice where oversampling is used without noise shaping (although I can't imagine a case where noise shaping would be useful without oversampling...)

Often noise shaping is used without oversampling, to steer quantization error from where you don't want it to where you can tolerate it, even if it is not out-of-band.

http://www.dspguru.com/dsp/tricks/fixed-point-dc-blocking-filter-with-noise-shaping

i've also done this for regular IIR filtering of audio data in fixed point, say, with the old Motorola 56K.
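The error-feedback structure behind this kind of noise shaping is worth seeing concretely. Below is a Python sketch (a hypothetical requantizer written for illustration, not the dspguru code or the 56K implementation): the previous sample's quantization error is fed back before the next quantization, which first-differences the error spectrum away from DC without any oversampling at all:

```python
import numpy as np

def requantize_noise_shaped(x, step):
    """Requantize to a coarse step with first-order error feedback.

    No oversampling here: the previous sample's quantization error is
    added back to the next input before quantizing, so the overall
    error spectrum is first-differenced (high-pass shaped) away from DC.
    """
    y = np.empty_like(x)
    err = 0.0
    for i, xi in enumerate(x):
        v = xi + err                       # feed back last sample's error
        y[i] = np.round(v / step) * step   # coarse quantizer
        err = v - y[i]                     # error to push up in frequency
    return y

n = 4096
x = 0.3 * np.sin(2 * np.pi * 50 * np.arange(n) / n)
step = 2.0 / 256

e_shaped = requantize_noise_shaped(x, step) - x
e_plain = np.round(x / step) * step - x

# Compare error power in the low band (first 100 FFT bins): the shaped
# error is much quieter there, at the cost of more high-frequency error.
low = slice(0, 100)
p_shaped = (np.abs(np.fft.rfft(e_shaped)) ** 2)[low].sum()
p_plain = (np.abs(np.fft.rfft(e_plain)) ** 2)[low].sum()
print(f"low-band error power, shaped/plain: {p_shaped / p_plain:.3f}")
```

The total error power actually goes up slightly, but it is redistributed toward high frequencies, which is exactly the "steer the error to where you can tolerate it" idea.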
 

Similar threads

  • · Replies 16 ·
Replies
16
Views
9K
  • · Replies 6 ·
Replies
6
Views
2K
Replies
6
Views
2K
Replies
17
Views
6K
  • · Replies 7 ·
Replies
7
Views
3K
Replies
7
Views
4K
Replies
5
Views
2K
  • · Replies 1 ·
Replies
1
Views
2K
  • · Replies 5 ·
Replies
5
Views
3K
  • · Replies 1 ·
Replies
1
Views
11K