Hello, I am confused as to how to determine the accuracy of a measurement of an AC signal from a Data Acquisition Board. I went to NI's website to look at the specifications of the board I was using (a USB-6009) here, and I don't know whether the "accuracy at full scale" refers to the range setting of the board or the scale of the signal the board is actually receiving. If someone could clear this up for me, I would really appreciate it. Thanks!
It depends on the range you select. For a measurement referenced to ground (GND), the range is ±10 V. That means you divide the 20 V span by 2^n, where n is the bit length of the output register.
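To make that division concrete, here is a minimal sketch (my own helper, not from NI's documentation) of the ideal step size of an A/D converter over a given span:

```python
def lsb_size(full_scale_span_volts, bits):
    """Ideal resolution of an ADC: the full-scale span divided by the number of codes."""
    return full_scale_span_volts / 2**bits

# The +/-10 V range spans 20 V; a 14-bit converter has 2^14 = 16384 codes.
print(lsb_size(20.0, 14))  # ~0.00122 V, i.e. about 1.22 mV per step
```

Note this is only the quantization step size; the board's real accuracy spec is larger, as discussed below in the thread.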
Not sure what you mean by "AC signal from..." You are using an input, correct? If so, Quinzio's equation gives the accuracy of the A/D, but you also need to consider your input sensing device. There will be error there that will propagate through your DAQ board.
Yes, that's right. I am measuring a signal that is input into the DAQ from a function generator. So what I am asking is how can I determine the error in the measurement I am taking? I'm pretty ignorant on how to use these things, so if you could avoid using acronyms that would be helpful. I don't know what an A/D is for instance...
Okay, sorry about the acronyms. A/D is a common notation for Analog-to-Digital converter, i.e. the electronics that, in your case, takes a sample of the analog signal from the function generator and converts it to a digital value of 12 or 14 bits, with a maximum sample rate of 48 kilosamples per second. (From the data sheet: 8 analog inputs at 12 or 14 bits, up to 48 kS/s.)

To find the error you need to know the error/uncertainty of the function generator for a particular setting (see your function generator's specifications). E.g. with a sine wave frequency setting of 100 hertz and an amplitude setting of 1 volt, there will be an uncertainty in the frequency and an uncertainty in the amplitude. Then there's the error/uncertainty of the A/D. The error of the function generator gets propagated through the A/D, so it's not necessarily as simple as adding the errors to get a final accuracy. This is called propagation of uncertainty. The article has some example formulas with two real variables A & B with standard deviations σ_{A} & σ_{B}.
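As a rough sketch of what those propagation formulas look like for two independent variables A and B (the standard quadrature rules; the numbers are placeholders, not your instrument's specs):

```python
import math

def uncertainty_of_sum(sigma_a, sigma_b):
    # f = A + B  ->  sigma_f = sqrt(sigma_A^2 + sigma_B^2)
    return math.hypot(sigma_a, sigma_b)

def relative_uncertainty_of_product(a, sigma_a, b, sigma_b):
    # f = A * B  ->  (sigma_f / f)^2 = (sigma_A / A)^2 + (sigma_B / B)^2
    return math.hypot(sigma_a / a, sigma_b / b)

# e.g. combining a small generator amplitude uncertainty with a larger ADC one:
print(uncertainty_of_sum(0.00002, 0.00773))  # dominated by the larger term
```

These formulas assume the two error sources are independent; correlated errors need the full covariance treatment described in the article.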
All right, I think I have it figured out now. The resolution of the analog-to-digital conversion that the DAQ makes can be calculated using the formulas here: http://en.wikipedia.org/wiki/Analog-to-digital_converter#Resolution Given 14 bits, the resolution the board can produce on the ±10 V range would be 1.221 mV. However, since a number of other factors could affect the measurement, it is probably better to use the values on NI's website for the range the measurement is made at. I'm assuming the table titled "Absolute accuracy at full scale, differential" is what I need to use, so for ±10 V the actual inaccuracy is 7.73 mV.

However, now I can't figure out what the uncertainty is with respect to the signal produced by the function generator. The specifications state that the "accuracy" and "stability" are both ±20 ppm, but it seems like 10^-6 Hz and volts are awfully small. Does it actually have something to do with the resolution? I can't find any guides that tell me what all these numbers really mean...
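To sanity-check those two numbers against each other (using the figures quoted above from NI's table; this is just my arithmetic, not an official calculation):

```python
# Ideal 14-bit resolution on the +/-10 V (20 V span) range:
resolution = 20.0 / 2**14          # ~1.22 mV per code

# Absolute accuracy at full scale, differential, +/-10 V range (NI table):
absolute_accuracy = 7.73e-3        # volts

print(resolution)
print(absolute_accuracy / resolution)  # the accuracy band is several LSBs wide
```

So the spec-sheet accuracy, not the raw bit resolution, is the number that limits the measurement.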
Good-quality test equipment should have very high accuracy. In my opinion, an accuracy and stability of ±20 ppm is small enough that you can consider it much less than (<<) the A/D accuracy; hence you could probably ignore it, depending on the measurement precision you are looking for.
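A quick back-of-the-envelope comparison illustrates this (assuming, for the sake of example, a 1 V amplitude setting; the board figure is the 7.73 mV quoted earlier in the thread):

```python
# 20 ppm of a 1 V amplitude setting:
generator_uncertainty = 20e-6 * 1.0    # 20 microvolts

# Board's absolute accuracy at full scale on the +/-10 V range:
board_accuracy = 7.73e-3               # volts

print(generator_uncertainty / board_accuracy)  # well under 1% of the board error
```

With the generator's contribution hundreds of times smaller than the board's, the combined uncertainty is essentially just the board's 7.73 mV.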