
A method to improve the vertical accuracy of an oscilloscope?

  1. Jun 15, 2011 #1
    Hello

    we have a number of oscilloscopes with 1.5% vertical accuracy (DPO-4032)

    we would like to improve (reduce) this number, to improve the quality of our tests.

    someone has proposed using a DMM (Fluke 45) to "calibrate" the oscilloscope and impart the DMM's much better accuracy of 0.025% to the scope measurements.

    they suggest:

    - measure a constant DC value with the DMM, treat the result as "true"
    - measure the same DC value with the oscilloscope.
    - subtract the one from the other, and apply the difference as a "correction factor" to the scope

    it is claimed that this correction will allow future scope measurements (of AC varying signals) to use the DMM's accuracy. thus improving our test quality.
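
    in rough pseudocode, the proposal amounts to something like this (the readings are made up, just to show the scheme):

    Code:
    dmm_reading_v = 0.5000      # DC value measured by the Fluke 45, treated as "true"
    scope_reading_v = 0.5060    # same DC value read off the DPO-4032 (hypothetical)

    correction_v = dmm_reading_v - scope_reading_v   # the proposed "correction factor"

    def corrected(scope_value_v):
        """Apply the fixed DC correction to a later scope reading."""
        return scope_value_v + correction_v

    # a later AC measurement, e.g. a peak value read off the scope
    print(corrected(1.006))   # only valid if the scope error really is a constant offset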

    Brilliant, right? So why do i feel like this is wrong? is the basic premise even valid?

    Thank you for considering, any and all responses will be greatly appreciated.
     
  3. Jun 15, 2011 #2
    It is invalid because the scope is not off by a constant 1.5%. If the case were that simple, it could easily just be calibrated better. The 1.5% accuracy means it will be up to 1.5% off the 'true' value. The error will not be a constant.

    Sometimes the output will be exactly the true value, sometimes it will be 101.5% of the true value, sometimes it will be 98.5% of the true value - depending on the signal and circumstances.

    Trust me, if it really were 'that easy' to make the scopes more accurate, why wouldn't the company just do it in their production line?
     
  4. Jun 16, 2011 #3
    yes i agree. of course the scope is not always off by 1.5% ... but wouldn't one particular scope be off by some arbitrary amount, and remain fairly consistently off by that amount for at least a short period of time?

    for instance, say we're measuring a 1.0 Vpp signal offset at 0.5 VDC: take one unique scope, measure 0.5 VDC with a calibrated Fluke 45 and then the same 0.5 VDC with the scope, calculate the difference as the scope's "adjustment" and then *immediately* measure our 1.0 Vpp + 0.5 VDC signal.

    why wouldn't that particular scope measurement, adjusted to the calibrated Fluke 45's measurement taken just a fraction of a second prior, have the same accuracy as the Fluke 45? Or at least something very close to it?
     
  5. Jun 16, 2011 #4
    This is where digital oscilloscopes have an advantage: their accuracy is determined by the ADCs used rather than by deflection or analog circuitry accuracy.
     
  6. Jun 16, 2011 #5
    uh, thanks, but that's not an answer. of course the DPO-4032 is a digital scope. i don't even know who uses analog scopes these days.

    how can you use a 5-1/2 digit DMM (0.025% accuracy) to "calibrate" an oscilloscope with 1.5% accuracy?
     
  7. Jun 16, 2011 #6
    "how can you use a 5-1/2 digit DMM (0.025% accuracy) to "calibrate" an oscilloscope with 1.5% accuracy?"

    You probably can't or the manufacturer would have done that to increase the accuracy of their oscopes. I would bet $100 that the 1.5% accuracy is not a linear error characterisable with a DC voltage that you could simply tune the system to remove. If it was a linear error, the designers of the oscope would have calibrated it out. Whenever you see accuracies for test equipment like that, it means that there are non-linear errors over the range of voltage or frequency inputs the device was built to accept that can't be removed without greatly increasing the cost of the device.

    But... If you want to try to remove it for an AC signal, the best idea would be to use a function generator to put in an approximation of the signal you are going to be measuring (approximate amplitude and frequency range) and then try to observe what the differences are. Putting in a DC signal to try to tune for an AC signal isn't going to get you very far.
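
    If you do try it, a rough sketch of the idea is below; the "scope" function here is just an invented error model standing in for real readings taken off the instrument:

    Code:
    # Sketch of characterizing the scope against a known AC source.
    # scope_amplitude_vpp() is a stand-in -- in practice this would be a
    # reading taken off the real scope, not this made-up error model.
    def scope_amplitude_vpp(freq_hz, true_vpp):
        gain_error = 0.008 + 0.004 * (freq_hz / 1e6)   # invented numbers
        return true_vpp * (1 + gain_error)

    test_points = [(1e3, 0.1), (1e3, 1.0), (1e3, 5.0),
                   (1e6, 0.1), (1e6, 1.0), (1e6, 5.0)]  # (Hz, reference Vpp)

    for freq_hz, ref_vpp in test_points:
        measured = scope_amplitude_vpp(freq_hz, ref_vpp)
        error_pct = 100 * (measured - ref_vpp) / ref_vpp
        print(f"{freq_hz:9.0f} Hz, {ref_vpp:4.1f} Vpp -> error {error_pct:+.2f} %")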
     
  8. Jun 16, 2011 #7
    Accuracy of oscilloscopes is a very difficult thing to determine, and you can't just magically get better than 1.5% accuracy by calibrating it against something more accurate. You're essentially at the mercy of your ADCs. If you want more accuracy you have to look into using "tricks" such as a large amount of averaging and optimizing the sampling rate.
     
  9. Jun 16, 2011 #8

    sophiecentaur


    Accuracy and repeatability are not the same thing. If you can provide the scope with a signal of the same form as the signal you want to measure and with a known amplitude then, if the scope is reliable / consistent, you can do better than the published figure for accuracy. It may be necessary to wait until the temperature has settled down.
    This applies to both analogue and digital equipment.

    Another possible trick might be to subtract a known amplitude of signal from the test signal and examine the difference signal. Only this (expanded) difference signal would then be subject to your 1.5% accuracy, so the error referred back to the full signal would be much smaller. It would need some fancy circuit design though.
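
    A rough worked example of why the difference trick helps (numbers invented for illustration):

    Code:
    # If you null out most of the signal with an accurately known reference,
    # the scope's 1.5% applies only to the small residual, not the full signal.
    true_signal_v = 1.000     # what you actually want to know
    reference_v = 0.990       # accurately known amplitude being subtracted
    residual_v = true_signal_v - reference_v        # 10 mV left for the scope to measure

    worst_case_error_v = 0.015 * residual_v         # 1.5% of 10 mV = 0.15 mV
    print(f"worst-case error: {worst_case_error_v * 1e3:.2f} mV, "
          f"i.e. {100 * worst_case_error_v / true_signal_v:.3f} % of the full signal")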
     
  10. Jun 16, 2011 #9
    Get a sig genny and a ratio transformer and compare the measurand with that on screen.
    I think mine is seven-decade, which should be enough accuracy for you.
     
  11. Jun 16, 2011 #10

    AlephZero


    The scope error will almost certainly be frequency dependent, so measuring the error at DC will only help for low frequency signals.

    Any signal that is not a pure sine wave will contain a spectrum of different frequencies, and the different errors in each frequency component will distort the shape of the "graph" of the signal as well as changing its amplitude.

    Even for a DC signal, the errors may vary with the amplitude of the input. So if you measure the error accurately as say 1.23% at full scale, there is no guarantee it will be 1.23% for any other input level.
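
    A toy illustration (the error model is entirely invented) of how a single-point DC correction fails to carry over to other input levels:

    Code:
    # Pretend scope with a 2 mV offset error and a 1% gain error (invented numbers).
    def scope_reads(true_v):
        return 1.010 * true_v + 0.002

    cal_level_v = 0.5
    correction_v = cal_level_v - scope_reads(cal_level_v)   # "correction" from the DMM

    for true_v in (0.1, 0.5, 1.0, 2.0):
        corrected = scope_reads(true_v) + correction_v
        print(f"true {true_v:4.1f} V -> corrected {corrected:.4f} V "
              f"(residual error {100 * (corrected - true_v) / true_v:+.2f} %)")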
     
  12. Jun 16, 2011 #11

    Averagesupernova


    Sophie pretty much nailed it. Repeatability and accuracy are not the same. I've worked for a test equipment manufacturer and we did this sort of thing to 'characterize' RF generators from a couple of MHz to a gig. The Fluke signal generator allowed direct control over the RF attenuator on its output through GPIB. We had a sensitive measuring receiver that stepped through the whole spectrum at about 7 different amplitude levels. About half a dozen generators received this characterization fairly regularly. Results were tracked from one characterization to the next. Nothing ever changed very much, if at all.
     
  13. Jun 16, 2011 #12
    Scopes are full of time-related distortion. Leading edges don't have the same value as the settling and settled values, and subtle things like the amplifiers changing temperature over time, due to self-heating, throw the readings off.
    The A/Ds are optimized more for response than accuracy, and the circuits suffer from recovery effects when overdriven.
    For all this, they do a pretty good job.
    If you can synthesize a waveform in phase with the test signal, then you may be able to use a differential probe (or transformer) to detect the error. Square waves are easy to synthesize using CMOS chips because they drive the signal to the two power supply rails.

    Given two or more of these "references," you can compare them for consistency.

    Best of luck,

    Mike in Plano
     
  14. Jun 17, 2011 #13

    sophiecentaur


    When I was using scopes seriously, they were calibrated using a square wave and probes were tuned for optimum. Scopes have never, afaik, been used for high accuracy. Other methods are normally devised specially for really high accuracy of measurement.
     
  15. Jun 17, 2011 #14
    Maybe I was too nebulous. If it's a digital scope, what ADC is being used (how many bits)? You can only achieve accuracy as good as what's defined by the ADC bits, and probably not even that good (resolution != accuracy). The DMM's 5-1/2 digits is roughly 18 bits. Most oscilloscopes are only 8-12 bits, or about 2-1/2 to 3-1/2 digits. On a good day you might find a high-end scope that manages 15-16 bits, or about 4-1/2 digits (again, this is resolution, not even accuracy).
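
    For reference, the rough digits-to-bits conversion (resolution only, not accuracy):

    Code:
    import math

    dmm_counts = 200_000     # a 5-1/2 digit meter resolves about 200,000 counts
    scope_levels = 2 ** 8    # an 8-bit scope ADC resolves 256 levels

    print(f"DMM   ~ {math.log2(dmm_counts):.1f} bits of resolution")       # ~17.6 bits
    print(f"scope ~ {math.log10(scope_levels):.1f} digits of resolution")  # ~2.4 digits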

    If your waveform is repetitive, you can integrate over multiple cycles to get better vertical accuracy (most high-end digital scopes made over the last 30 years have this feature built in). But this does not (and can never) work for single-shot waveforms.
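
    A quick sketch of the averaging idea (it only beats down random noise, roughly as 1/sqrt(N); it does nothing for a systematic gain error):

    Code:
    import random

    true_v = 1.000
    noise_rms = 0.010      # invented random noise per acquisition
    n_avg = 256            # number of repetitive acquisitions averaged

    samples = [true_v + random.gauss(0, noise_rms) for _ in range(n_avg)]
    averaged = sum(samples) / n_avg

    print(f"single acquisition noise ~ {noise_rms * 1e3:.1f} mV rms, "
          f"averaged result {averaged:.4f} V")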

    This is the inherent issue with all oscilloscopes: if you need very high vertical accuracy, especially single-shot, it's often the wrong instrument to use.

    Better to use other measurement instruments or techniques, such as a sampling DMM (up to a certain frequency) like an Agilent/HP 3458A, or some waveform integration technique.

    And in some cases you have to get completely out of the box: one time I got a call from a professor who was sure he needed a 50 GHz oscilloscope (which didn't exist back then). Looking at what he was really trying to do, it turned out all he really needed was a microwave frequency counter (he was thinking in terms of the "idiot" undergrad-taught technique of using an oscilloscope to measure frequency).
     
  16. Jun 18, 2011 #15
    Actually you can do better if you use the scope as an indicator in a nulling method.
    Or simply compare with a ratio transformer as I said earlier. Then you could be up against the basic accuracy of the range switch, rather than the ADC.
     