Engineering: Calculating the reliable bits of an op-amp circuit

AI Thread Summary
The discussion centers on calculating the number of reliable bits of an op-amp circuit: the given answer is 6 bits, but the poster's calculation yields only 5. The maximum output voltage is 2.5000 V for a 10 mV input, assuming infinite open-loop gain, while the minimum output with a finite open-loop gain of 10,000 works out to about 2.439 V, a 2.44% error. The ADC's resolution, expressed as a fraction of full scale, must be finer than that 2.44% amplifier error: a 5-bit converter resolves only 3.13% and is unsuitable, while a 6-bit converter resolves 1.56% and is acceptable. Extra bits beyond that are meaningless, so the most economical choice is a 6-bit converter.
geft
For some reason I can't get the answer right. It is given as 6 bits but I calculated it to be 5. What am I doing wrong?
 

Attachment: Untitled.png (problem figure)
The max output occurs when Vin = 10 mV and equals 2.5000 V if the op amp's open-loop gain Aol is infinite.

The min output you would get for that same input, if Aol = 1e4, is as you have correctly computed.
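For reference, a minimal sketch of that computation, assuming a feedback network set for an ideal closed-loop gain of 250 (so that 10 mV in gives 2.5000 V out) and the standard closed-loop gain formula:

$$G_{cl} = \frac{A_{ol}}{1 + A_{ol}\beta} = \frac{10^4}{1 + 10^4/250} \approx 243.90$$

so Vout ≈ 243.90 × 10 mV ≈ 2.4390 V, which is (2.5000 − 2.4390)/2.5000 ≈ 2.44% below the ideal output.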

So calculating the % error is obvious.

As for finding the max number of accurate bits: how many bits would the adc have to have in order to produce a 1 lsb (least significant bit) error?
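One way to phrase that criterion (using ε for the fractional amplifier error, a label not used in the thread): pick the smallest n for which one LSB, 1/2^n of full scale, is no coarser than ε:

$$n = \left\lceil \log_2 \frac{1}{\varepsilon} \right\rceil = \left\lceil \log_2 \frac{1}{0.0244} \right\rceil = \lceil 5.36 \rceil = 6$$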
 
The gain error is 2.44%, as given in parentheses, but since that is the next part of the question I didn't think it was needed for part (ii). To have 7 reliable bits (1111 111X), it needs 254 counts, and for 6 reliable bits (1111 11XX), it needs 252 counts?
 
Assume an 8-bit converter. For that, the LSB is 1/2^8 of full scale, i.e. 1/2^8 = 0.391% of full scale. Since you can get 2.44% error from the amplifier, obviously an 8-bit converter would be overkill.

Now try that with a 7-bit, then a 6-bit, then a 5-bit converter.
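As a quick check, here is a minimal Python sketch (not from the thread) that tabulates 1 LSB as a percentage of full scale for each candidate bit count and compares it against the ~2.44% amplifier error:

```python
# Sketch: 1 LSB as a percentage of full scale for candidate ADC bit counts,
# compared against the ~2.44% worst-case amplifier gain error from above.
AMP_ERROR_PCT = 2.44  # amplifier gain error, % of full scale

for bits in (5, 6, 7, 8):
    lsb_pct = 100.0 / 2**bits  # one LSB as a % of full scale
    verdict = "coarser" if lsb_pct > AMP_ERROR_PCT else "finer"
    print(f"{bits}-bit: 1 LSB = {lsb_pct}% of full scale ({verdict} than the amp error)")
```

This reproduces the figures discussed here: 3.125% for 5 bits, 1.5625% for 6, 0.78125% for 7, and 0.390625% for 8.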

BTW, not to confuse you further, but the answer to part b is really not correct. The max error is ±1.22%: for a nominal full-scale output of 2.500 V, the actual output is 2.4695 ± 0.0305 V.
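That band follows from re-centering between the finite-gain output (≈ 2.4390 V, as computed above) and the ideal 2.5000 V:

$$V_{out} = \frac{2.5000 + 2.4390}{2} \pm \frac{2.5000 - 2.4390}{2} = 2.4695 \pm 0.0305\ \text{V},$$

and 0.0305/2.5000 ≈ 1.22% of full scale.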

In short, you can do part b first, then do part a. Or you can work with just the voltages as you did. Same difference.
 
Is the incorrect answer mine or the given one? With 5 bits it's 3.125% and with 6 bits it's 1.563%, neither of which is 2.44%, so I guess neither is correct?
 
geft said:
Is the incorrect answer mine or the given one? With 5 bits it's 3.125% and with 6 bits it's 1.563%, neither of which is 2.44%, so I guess neither is correct?

The point is: your converter's resolution should be better than the error ascribable to the amplifier, but not beyond that.

With a 5-bit converter your resolution is 3.13%, obviously coarser than the error, so that's not a good choice. With 6 bits the resolution is 1.56%, which is below the amp's error, so that's better than a 5-bit converter. If you go to 7 bits you get 0.78% resolution, which is 2:1 better than with a 6-bit, but the LSB is now meaningless.

Bottom line: pick the converter with resolution just better (lower %) than the amp error, but not unnecessarily better. That makes a 6-bit converter the optimum, most economical choice. (Converters get more expensive as the number of bits increases, other things being equal.)
 