Accuracy of measurements from DAQ board

Discussion Overview

The discussion revolves around determining the accuracy of measurements from an AC signal using a Data Acquisition (DAQ) board, specifically the USB-6009 model. Participants explore various factors affecting measurement accuracy, including the specifications of the DAQ board, the role of the input sensing device, and the propagation of errors through the system.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • One participant questions whether "accuracy at full scale" refers to the scale setting of the board or the actual signal being received.
  • Another participant notes that the measurement range selected affects the accuracy and mentions the need to consider the output register's bit length.
  • Concerns are raised about the accuracy of the A/D conversion and the need to account for errors from the input sensing device, which may propagate through the DAQ board.
  • A participant explains that the error in measurement depends on the uncertainty of the function generator's settings, including frequency and amplitude.
  • Discussion includes the calculation of resolution based on the number of bits in the A/D conversion and the significance of the absolute accuracy provided by the manufacturer.
  • One participant expresses confusion regarding the meaning of accuracy and stability specifications from the function generator, particularly the ±20ppm value.
  • Another participant suggests that the accuracy and stability of the function generator may be negligible compared to the DAQ's accuracy, depending on the measurement significance.

Areas of Agreement / Disagreement

Participants do not reach a consensus on the best method to determine measurement accuracy, with multiple competing views on the significance of various factors and specifications involved.

Contextual Notes

Limitations include the potential misunderstanding of specifications, the dependence on the definitions of accuracy and resolution, and the unresolved nature of how to effectively combine uncertainties from different components in the measurement system.

HyperSniper
Hello, I am confused about how to determine the accuracy of a measurement of an AC signal from a Data Acquisition (DAQ) board.

I went to NI's website to look at the specifications of the board I was using (a USB-6009), http://sine.ni.com/ds/app/doc/p/id/ds-218/lang/en#header0 , and I don't know whether the "accuracy at full scale" refers to the scale setting of the board or the scale of the signal that the board is actually receiving.

If someone could clear this up for me, I would really appreciate it. Thanks!
 
It depends on the range you select.
For measurements referenced to GND, the range is ±10 V.
That means you divide the 20 V span by 2^n, where n is the bit length of the output register.
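The division described above can be sketched in a few lines of Python (a sketch for the arithmetic only; the function name is mine, not from the thread):

```python
def adc_resolution(span_volts, n_bits):
    """Smallest voltage step an ideal n-bit converter can resolve
    over the given input span."""
    return span_volts / 2 ** n_bits

# USB-6009 on the ±10 V range (a 20 V span), 14-bit differential input
step = adc_resolution(20.0, 14)
print(f"{step * 1e3:.3f} mV per count")  # ≈ 1.221 mV
```

Note this is the *ideal* resolution; the usable accuracy is worse, as discussed below in the thread.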
 
HyperSniper said:
Hello, I am confused about how to determine the accuracy of a measurement of an AC signal from a Data Acquisition (DAQ) board.

Not sure what you mean by "AC signal from..." You are using an input, correct?

If so, Quinzio's equation is for the accuracy of the A/D, but you need to also consider your input sensing device. There will be error there that will propagate through your DAQ board.
 
dlgoff said:
Not sure what you mean by "AC signal from..." You are using an input, correct?

If so, Quinzio's equation is for the accuracy of the A/D, but you need to also consider your input sensing device. There will be error there that will propagate through your DAQ board.

Yes, that's right. I am measuring a signal that is input into the DAQ from a function generator.

So what I am asking is how can I determine the error in the measurement I am taking?

I'm pretty ignorant on how to use these things, so if you could avoid using acronyms that would be helpful. I don't know what an A/D is for instance...
 
HyperSniper said:
Yes, that's right. I am measuring a signal that is input into the DAQ from a function generator.

So what I am asking is how can I determine the error in the measurement I am taking?

I'm pretty ignorant on how to use these things, so if you could avoid using acronyms that would be helpful. I don't know what an A/D is for instance...

Okay, sorry about the acronyms. A/D is common notation for an Analog-to-Digital converter, i.e. the electronics that, in your case, takes a sample of the analog signal from the function generator and converts it to a digital value of 12 or 14 bits, at a maximum sample rate of 48 kilosamples per second. (From the data sheet: 8 analog inputs at 12 or 14 bits, up to 48 kS/s.)

In order to find the error you need to know the error/uncertainty of the function generator for a particular setting (see your function generator's specifications). E.g. with a sine wave frequency setting of 100 hertz and an amplitude setting of 1 volt, there will be an uncertainty in the frequency and an uncertainty in the amplitude. Then there's the error/uncertainty of the A/D. The error of the function generator will get propagated through the A/D, so it's not necessarily as simple as adding the errors to get a final accuracy. This is called error propagation: http://en.wikipedia.org/wiki/Error_propagation
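One common convention for combining *independent* uncertainties is root-sum-of-squares ("quadrature"). This is an assumption on my part, not a method the thread settles on, and the example values are purely illustrative:

```python
import math

def combine_quadrature(*uncertainties):
    """Combine independent uncertainties in quadrature
    (root-sum-of-squares)."""
    return math.sqrt(sum(u ** 2 for u in uncertainties))

# Hypothetical example: two independent voltage uncertainties, in volts
total = combine_quadrature(2e-3, 5e-3)
print(f"{total * 1e3:.2f} mV")  # ≈ 5.39 mV
```

Notice that quadrature addition is dominated by the largest term, which is why a much smaller source of error can often be neglected.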
 
All right, I think I have got it figured out now.

The resolution of the Analog to Digital conversion that the DAQ makes can be calculated using the formulas here:

http://en.wikipedia.org/wiki/Analog-to-digital_converter#Resolution

Given 14 bits, the resolution the board can produce on the ±10 V range would be 1.221 mV.

However, since there are a number of other factors that could affect the measurement, it would probably be better to use the values on NI's website for the range the measurement is made at. I'm assuming the table titled "Absolute accuracy at full scale, differential" is what I need to use. So for ±10 V the actual level of inaccuracy is 7.73 mV.
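The comparison between the ideal resolution and the manufacturer's absolute-accuracy figure can be put side by side (values as quoted in this thread; check the current USB-6009 datasheet):

```python
# Ideal one-count step of a 14-bit converter over a 20 V span (±10 V range)
resolution = 20.0 / 2 ** 14          # ≈ 1.221 mV

# NI's "Absolute accuracy at full scale, differential" for ±10 V
absolute_accuracy = 7.73e-3          # 7.73 mV, from the datasheet table

print(f"resolution:        {resolution * 1e3:.3f} mV")
print(f"absolute accuracy: {absolute_accuracy * 1e3:.2f} mV")
print(f"ratio:             {absolute_accuracy / resolution:.1f}x")
```

The absolute accuracy is several times coarser than the bit resolution, which supports using the datasheet figure rather than the ideal 2^n calculation.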

However, now I can't figure out what the uncertainty is with respect to the signal produced by the function generator. The specifications (http://www.gwinstek.com/en/product/productdetail.aspx?pid=5&mid=72&id=92) state that the "accuracy" and "stability" are both ±20 ppm, but it seems like parts in 10^6 of a hertz or a volt are awfully small. Does it actually have something to do with the resolution? I can't find any guides that explain what all these numbers really mean...
 
HyperSniper said:
However, now I can't figure out what the uncertainty is with respect to the signal produced by the function generator. The specifications (http://www.gwinstek.com/en/product/productdetail.aspx?pid=5&mid=72&id=92) state that the "accuracy" and "stability" are both ±20 ppm, but it seems like parts in 10^6 of a hertz or a volt are awfully small. Does it actually have something to do with the resolution? I can't find any guides that explain what all these numbers really mean...

Good quality test equipment should have a very high accuracy. In my opinion, an accuracy and stability of ±20 ppm is good enough that you may be able to consider it much less (<<) than the A/D accuracy; hence you could probably ignore it, depending on the measurement significance you are looking for.
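A quick sanity check makes the "negligible" argument concrete. A ppm specification is relative to the setting, so ±20 ppm of a 1 V amplitude is only 20 µV (the 1 V setting is an assumed example; the 7.73 mV figure is the DAQ accuracy quoted earlier in the thread):

```python
amplitude_setting = 1.0               # volts, example setting
u_generator = amplitude_setting * 20e-6   # ±20 ppm of the setting -> 20 µV
u_daq = 7.73e-3                       # DAQ absolute accuracy, ±10 V range

print(f"generator: {u_generator * 1e6:.0f} µV")
print(f"DAQ:       {u_daq * 1e3:.2f} mV")
# The generator term is roughly 400x smaller than the DAQ term here,
# so it contributes essentially nothing to the combined uncertainty.
```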
 
