berkeman said:
Yeah, that's a useful trick to have in your EE bag of tricks.
Ratiometric conversion is not a trick. It is a professional design technique whose importance has grown steadily over the years, and it is now more important than ever. As an example, accurate high-resolution digital scales are not practical without ratiometric conversion.
sophiecentaur said:
But why? (Except to prove that it can be done.) …
… There was a time when component count was everything. Nowadays, the optimum solution to most problems is digital - with good reason.
Because ratiometric conversion is now so widely applied, support for it is built into many microcontroller families, for example the dsPIC. That makes it possible to program a single-chip solution covering the analogue front end, digital conversion, signal processing and I/O.
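As an illustration, here is a minimal C sketch of a ratiometric read on a generic microcontroller. The HAL calls adc_select_reference() and adc_read(), the channel numbers and the 10-bit full scale are all assumptions made for the sketch, not a specific dsPIC API. The point is the wiring: the sensor excitation drives the ADC reference pin, so the raw code is already the ratio.

```c
#include <stdint.h>

#define ADC_FULL_SCALE  1023u  /* 10-bit converter assumed for this sketch */
#define REF_EXCITATION  0      /* hypothetical: Vref+ pin wired to the sensor excitation */
#define CH_SENSOR       1      /* hypothetical analogue input channel */

/* Hypothetical HAL calls; substitute the vendor's ADC driver. */
extern void     adc_select_reference(int source);
extern uint16_t adc_read(int channel);

/* Return the sensor output as a fraction of its excitation voltage,
 * scaled 0..ADC_FULL_SCALE. Because the ADC reference IS the excitation,
 * supply drift cancels out and no stable voltage reference is needed. */
uint16_t read_ratiometric(void)
{
    adc_select_reference(REF_EXCITATION);
    return adc_read(CH_SENSOR);
}
```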
sophiecentaur said:
Engineering is about doing things the cheapest way, consistent with accuracy, reliability and performance.
Because ratiometric conversion uses a sampling converter, there is no requirement to synchronously sample and hold two channels before performing two separate conversions. It therefore doubles the maximum data conversion rate, while halving the power and eliminating sample jitter and phase problems.
With ratiometric conversion, not only is cost lowest, but accuracy, reliability and performance are all significantly improved.
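For contrast, the non-ratiometric approach needs two conversions taken at different instants plus a division, which is where the halved data rate, the extra power and the sample skew come from. A sketch, again using hypothetical names:

```c
#include <stdint.h>

#define ADC_FULL_SCALE  1023u  /* 10-bit converter assumed */
#define CH_SENSOR       1      /* hypothetical signal channel */
#define CH_REF          2      /* hypothetical channel measuring the reference */

extern uint16_t adc_read(int channel);  /* hypothetical HAL call */

/* Two conversions taken one after the other, then a divide in software. */
uint16_t two_channel_ratio(void)
{
    uint16_t sig = adc_read(CH_SENSOR);  /* conversion 1 */
    uint16_t ref = adc_read(CH_REF);     /* conversion 2, later in time: skew */
    if (ref == 0)
        return 0;                        /* guard the division by zero */
    /* Assumes ref >= sig, as in the unipolar case discussed below;
     * otherwise the result would need clamping. */
    return (uint16_t)(((uint32_t)sig * ADC_FULL_SCALE) / ref);
}
```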
sophiecentaur said:
Using a variable reference for an ADC means that the multiplication process is asymmetrical.
There is always a fundamental problem with division by zero. But the OP specified voltages between 0 and 5 volts, not –5 V through zero to +5 V, so the OP clearly does not require bipolar ratio computation. There will be no reference polarity reversal relative to zero volts, so a four-quadrant multiplying converter is not needed. Where a ratiometric conversion is needed, it is almost always the case that the reference does not change polarity and remains greater than the signal.
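To make that concrete: for an N-bit converter the ratiometric transfer function is
$$ \mathrm{code} = \left\lfloor \frac{V_{sig}}{V_{ref}} \, 2^N \right\rfloor, \qquad 0 \le V_{sig} \le V_{ref} $$
which is undefined only as the reference approaches zero, and stays within 0 to 2^N − 1 whenever the reference remains above the signal, as it does here.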
sophiecentaur said:
Phase and frequency response of the reference input is unlikely to be the same as that of the input
Frequency response does not need to be the same in this case, because the OP specified slowly varying signals. In any case, the reference input bandwidth on modern ratiometric converters often matches the signal input bandwidth, because the two paths use identical input circuitry to cancel drift.
sophiecentaur said:
and what about linearity?
What about it? Converters these days have linearity better than ½ LSB. That was not the case 50 or even 25 years ago.
sophiecentaur said:
What is the specification for performance with an out of range reference input voltage?
Reference inputs are now often differential. By using MOSFET rail-to-rail analogue design, both –Vin and +Vin will continue to function up to 0.3 V beyond the supply rails, and the reference and signal inputs are protected from more extreme voltages.
sophiecentaur said:
I was just wondering about the lineup procedure for such a set up.
I see no reason why it should not be self-calibrating and need no adjustment. Any transducer errors are better corrected in software than with analogue trimming or adjustment.
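As a sketch of what "corrected in software" can look like, here is a hypothetical two-point calibration: the raw ratio codes at zero load and at one known load are recorded at the factory, and every reading is mapped through them. All names are illustrative.

```c
#include <stdint.h>

typedef struct {
    int32_t code_zero;   /* raw ratio code recorded at zero load    */
    int32_t code_span;   /* raw ratio code recorded at a known load */
    int32_t span_units;  /* that known load, in output units        */
} cal_t;

/* Remove the transducer's offset and gain errors digitally:
 * no trimpots, and the correction can be redone at any time. */
int32_t apply_calibration(const cal_t *cal, int32_t code)
{
    int32_t span = cal->code_span - cal->code_zero;
    if (span == 0)
        return 0;  /* guard an unprogrammed calibration record */
    return (int32_t)(((int64_t)(code - cal->code_zero) * cal->span_units) / span);
}
```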
When a typical ADC generates two successive digital values, those values might be at the low end of the scale, say 11 LSB and 10 LSB. When the CPU divides them it gets 1.1, but with roughly ±20% uncertainty: with ±1 LSB of quantisation error on each reading, the ratio can range from 10/11 ≈ 0.91 up to 12/9 ≈ 1.33. That is not the case with a ratiometric converter, where the output resolution and accuracy are independent of the input signal voltage. A 10-bit ratiometric converter should preserve ±0.1% across the input range.
Ratiometric conversion also reduces the cost and component count by eliminating the need for a stable voltage reference. References can get very expensive for converters of 20 bits and above.
So to sum it all up, a ratiometric converter (probably implemented in a microcontroller) will double the data rate and offer a single-chip solution without any external analogue components.
It is an ideal digital solution.