Calculating reliable bits of op-amp circuit

  • Context: Engineering
  • Thread starter: geft
  • Tags: Bits, Circuit, Op-amp

Discussion Overview

The discussion revolves around calculating the reliable bits of an op-amp circuit, focusing on the relationship between amplifier gain, output voltage, and the resolution of an analog-to-digital converter (ADC). Participants explore how to determine the number of bits required for accurate representation given the error introduced by the op-amp.

Discussion Character

  • Technical explanation
  • Mathematical reasoning
  • Debate/contested

Main Points Raised

  • One participant expresses confusion over obtaining a different answer (5 bits) compared to the given answer (6 bits).
  • Another participant calculates the maximum output voltage for an infinite gain op-amp and discusses the minimum output for a finite gain, suggesting that calculating the percentage error is straightforward.
  • A participant notes the given gain error of 2.44% and proposes that 7 reliable bits (1111 111X) correspond to a count of 254, while 6 reliable bits (1111 11XX) correspond to a count of 252.
  • Discussion includes the implications of using an 8-bit converter, noting that its resolution (0.391%) is significantly better than the amplifier's error (2.44%), suggesting it may be excessive.
  • One participant questions whether their answer or the given one is incorrect, noting that the resolutions for 5 bits (3.125%) and 6 bits (1.563%) do not match the amplifier's 2.44% error.
  • Another participant reiterates the need for the converter's resolution to be better than the amplifier's error, discussing the implications of using 5, 6, or 7 bits and concluding that a 6-bit converter is the most economical choice.

Areas of Agreement / Disagreement

Participants express differing views on the correct number of reliable bits and the implications of amplifier error on ADC selection. There is no consensus on the correct answer, and multiple competing perspectives remain regarding the optimal choice of bits for the converter.

Contextual Notes

Participants reference specific calculations and assumptions about amplifier gain and ADC resolution, but the discussion does not resolve the discrepancies in their findings or the assumptions underlying their calculations.

geft
For some reason I can't get the answer right. It is given as 6 bits but I calculated it to be 5. What am I doing wrong?
 

Attachments

  • Untitled.png (18.6 KB)
The max output is when Vin = 10 mV and equals 2.5000V if the op amp gain Aol is infinite.

The min output you would get for the same input if Aol = 1e4 is as you have correctly computed.

So calculating the % error is obvious.

As for finding the max number of accurate bits: how many bits would the adc have to have in order to produce a 1 lsb (least significant bit) error?
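The finite-gain error described above can be sketched numerically. This is only an illustration; the specific values (Aol = 1e4, an ideal closed-loop gain of 250, i.e. 2.5 V out for a 10 mV input) are taken from the thread and the attachment being discussed:

```python
# Sketch of the finite-open-loop-gain error calculation.
# Assumed values from the thread: Aol = 1e4, ideal closed-loop
# gain of 250 (2.5 V out for 10 mV in).
Aol = 1e4            # open-loop gain
G_ideal = 250.0      # ideal (infinite-Aol) closed-loop gain
Vin = 0.010          # input voltage, volts

beta = 1 / G_ideal                        # feedback factor
G_real = Aol / (1 + Aol * beta)           # actual closed-loop gain
v_max = Vin * G_ideal                     # 2.5000 V, infinite-gain output
v_min = Vin * G_real                      # output with finite Aol
error_pct = (v_max - v_min) / v_max * 100
print(f"gain error = {error_pct:.2f}%")   # about 2.44%
```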
 
The gain error is 2.44%, as given in parentheses, but since that is the next part of the question I didn't think it was needed for part (ii). To have 7 reliable bits (1111 111X), it needs a count of 254, and for 6 bits (1111 11XX), a count of 252?
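Those counts follow directly from the bit patterns, reading X (the unreliable bit) as 0:

```python
# 7 reliable bits of an 8-bit word: 1111 111X -> 1111 1110
seven_reliable = 0b11111110
# 6 reliable bits: 1111 11XX -> 1111 1100
six_reliable = 0b11111100
print(seven_reliable, six_reliable)  # 254 252
```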
 
Assume an 8 bit converter. For that, the LSB is 1/2^8 of full scale, i.e. 1/256 ≈ 0.391% of full scale. Since you can get 2.44% error from the amplifier, obviously an 8 bit converter would be overkill.

Now try that with 7 bit, then 6 bit, then 5 bit converter.
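That comparison, repeated for each bit width, can be sketched in a short loop (the 2.44% amplifier error is from the thread; the loop itself is just an illustration):

```python
amp_error_pct = 2.44  # amplifier gain error quoted in the thread
for n in (8, 7, 6, 5):
    lsb_pct = 100 / 2**n  # 1 LSB as a percentage of full scale
    verdict = "finer" if lsb_pct < amp_error_pct else "coarser"
    print(f"{n}-bit: LSB = {lsb_pct:.3f}% of full scale ({verdict} than the amp error)")
```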

BTW, not to confuse you further, but the answer to part b is really not correct. The max error is +/- 1.22%: for a nominal 2.500 V converter input, the actual value is 2.4695 +/- 0.0305 V.

In short, you can do part b first, then do part a. Or you can work with just the voltages as you did. Same difference.
 
The incorrect answer is mine or the given one? With 5 bits it's 3.125% and for 6 bits it's 1.563%, neither of which is 2.44%, so I guess neither is correct?
 
geft said:
The incorrect answer is mine or the given one? With 5 bits it's 3.125% and for 6 bits it's 1.563%, neither of which is 2.44%, so I guess neither is correct?

The point is: your converter's resolution should be better than the error ascribable to the amplifier, but not beyond that.

With a 5 bit your resolution is 3.13%, obviously worse than the error, so that's not a good choice. With 6 bits the resolution is 1.56%, which is below the amp's error, so that's better than a 5 bit converter. If you go 7 bits you get 0.781% resolution, which is 2:1 better than with a 6 bit, but the LSB now is meaningless.

Bottom line: pick the converter with resolution just better (lower %) than the amp error, but not unnecessarily better. That makes a 6 bit the optimum, the most economical, choice. (Converters get more expensive as the no. of bits increases, other things being equal).
 
