When does the calibration uncertainty contribute to a measurement uncertainty?

SUMMARY

The discussion centers on the contribution of calibration uncertainty to overall measurement uncertainty, particularly in the context of the GUM 1995 method. Participants highlight that while calibration certificates provide error and uncertainty data, the calibration uncertainty is often not included in measurement uncertainty calculations. This omission is attributed to the Test Accuracy/Uncertainty Ratio typically exceeding 4:1, especially for electrical instruments. However, literature from journals like Metrologia suggests that comprehensive error budgets should account for all sources of uncertainty, including calibration uncertainty.

PREREQUISITES
  • Understanding of the GUM 1995 method for uncertainty calculations
  • Familiarity with calibration certificates and their components
  • Knowledge of Test Accuracy/Uncertainty Ratios
  • Basic principles of root sum of squares (RSS) for combining uncertainties
NEXT STEPS
  • Research the GUM 1995 method for detailed uncertainty analysis
  • Explore the concept of Test Accuracy/Uncertainty Ratios in various calibration contexts
  • Examine case studies in Metrologia that address comprehensive error budgets
  • Learn about the application of root sum of squares in uncertainty calculations
USEFUL FOR

Metrologists, calibration engineers, quality assurance professionals, and anyone involved in precision measurement and uncertainty analysis will benefit from this discussion.

fonz
Often a calibration certificate for an instrument states the error found during the calibration as well as the uncertainty associated with the calibration itself.

I'm researching uncertainty calculations using the GUM 1995 method and I haven't found one yet that includes the uncertainty of the calibration result as a source of measurement uncertainty for a particular instrument. Only the uncertainty derived from the error found by the calibration process is used. Is there a reason for this? Is it likely because in most cases there is a Test Accuracy / Uncertainty Ratio greater than 4:1?
 
I suspect it will depend on where you are looking and what you are calibrating. The uncertainty associated with the calibration "experiment" itself will in many cases (pretty much all electrical instruments) be way lower (in some cases a couple of orders of magnitude) than the uncertainty associated with the instrument being calibrated.
That said, if you look up some papers from journals such as Metrologia you will find cases where every part of the error budget is taken into account.
 
f95toli said:
I suspect it will depend on where you are looking and what you are calibrating. The uncertainty associated with the calibration "experiment" itself will in many cases (pretty much all electrical instruments) be way lower (in some cases a couple of orders of magnitude) than the uncertainty associated with the instrument being calibrated.
That said, if you look up some papers from journals such as Metrologia you will find cases where every part of the error budget is taken into account.

Hi thanks for the reply. That is essentially my understanding and I think that the common requirement for a Test Accuracy / Uncertainty Ratio of 4:1 or even 10:1 validates your statement.
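
To make that concrete (my own back-of-envelope arithmetic with illustrative numbers, not from any standard): if the device under test contributes a standard uncertainty u_DUT and the calibration contributes u_cal = u_DUT / TUR, then combining by RSS gives

u_total = sqrt(u_DUT^2 + u_cal^2) = u_DUT * sqrt(1 + 1/TUR^2)

At a 4:1 ratio that factor is sqrt(1 + 1/16) ≈ 1.031, so ignoring the calibration uncertainty understates the total by only about 3%; at 10:1 it is about 0.5%.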

It has got me thinking though. When considering the uncertainty of a measurement made by an instrument with a known calibration, is it correct to treat the error quoted by a calibration certificate as a standard uncertainty or an expanded uncertainty, and if so, at what level of confidence? For example, I have attached an image of a typical calibration chart for a pressure sensor. The calibration tolerance is shown in blue, and the calibration points are highlighted in green along with error bars representing the uncertainty of the calibration itself. Is it correct to say that the contribution of uncertainty due to the accuracy of this device on any measurement made by it is within +/-2% of span? If so, at what level of confidence?

[Attached image: Calibration Uncertainty.PNG — calibration chart for a pressure sensor showing the calibration tolerance (blue) and the calibration points with uncertainty error bars (green)]
 
The short answer is "always." All sources of uncertainty always contribute to the total uncertainty of a measurement, including calibration uncertainty. The question, as you've already discussed, is whether that calibration uncertainty is negligible compared to the rest of the sources. When you do a root sum of squares to add uncertainties, small contributions fade away pretty quickly.
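
To illustrate how quickly small contributions fade under RSS, here is a quick numerical sketch (Python, with made-up example numbers):

Code:
import math

# Illustrative standard uncertainties in the same units (e.g. % of span).
u_instrument = 1.0    # dominant source: the instrument under test
u_calibration = 0.25  # calibration uncertainty at a 4:1 ratio

# Root sum of squares, the GUM combination for uncorrelated inputs.
u_combined = math.sqrt(u_instrument**2 + u_calibration**2)
print(u_combined)  # ~1.031, only ~3% above u_instrument alone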
 
Would it be appropriate to assume a rectangular probability distribution for the calibration tolerance (blue line in the previous figure) and combine it with the expanded uncertainty of the calibration (green error bars) using the root sum of squares method to obtain a standard uncertainty for the device at calibration?

If the 'as-found' calibration error is much less than the calibration tolerance then I suppose that is quite conservative, so potentially the largest calibration error could be used and assumed to follow a rectangular probability distribution. Combining that with the standard uncertainty of the calibration itself using RSS would then give a standard uncertainty that accounts for both the calibration of the device and the uncertainty of the calibration itself. It seems sensible to me but it's not made clear in the GUM.
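
As a sketch of that recipe (Python; assuming the certificate's expanded uncertainty was reported with the usual coverage factor k = 2 for ~95% confidence, and using placeholder numbers):

Code:
import math

tolerance = 2.0    # calibration tolerance half-width, % of span
U_expanded = 0.5   # expanded calibration uncertainty from the certificate
k = 2.0            # assumed coverage factor (~95% confidence)

# GUM Type B: rectangular distribution half-width divided by sqrt(3).
u_tolerance = tolerance / math.sqrt(3)

# Convert the expanded calibration uncertainty to a standard uncertainty.
u_cal = U_expanded / k

# Combine by RSS for a standard uncertainty of the calibrated device.
u_device = math.sqrt(u_tolerance**2 + u_cal**2)
print(u_device)  # ~1.18 % of span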
 
fonz said:
If the 'as-found' calibration error is much less than the calibration tolerance then I suppose that is quite conservative, so potentially the largest calibration error could be used and assumed to follow a rectangular probability distribution. Combining that with the standard uncertainty of the calibration itself using RSS would then give a standard uncertainty that accounts for both the calibration of the device and the uncertainty of the calibration itself. It seems sensible to me but it's not made clear in the GUM.
I think this is not a good strategy unless you have more data about the calibration. For instance there might be a systematic effect (say, temperature in your example) that would shift that green line to the other side of the calibration tolerance. Unless you know that the calibration data covers the entire calibration parameter space, this is not good practice.
If you need numbers better than the 2% band, the instrument looks capable of higher precision, but you would need to supply a correction factor using "controls", i.e. known outcomes. This is often done for medical diagnostic devices.
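
For what it's worth, a minimal sketch of that correction-factor approach (hypothetical control data; the fit is just an ordinary least-squares line):

Code:
import numpy as np

# Hypothetical controls: known reference values vs. raw instrument readings.
reference = np.array([0.0, 25.0, 50.0, 75.0, 100.0])  # % of span
reading = np.array([0.3, 25.6, 50.9, 76.1, 101.2])    # % of span

# Fit reading = slope * reference + offset, then invert it as a correction.
slope, offset = np.polyfit(reference, reading, 1)

def corrected(raw):
    # Map a raw reading back onto the reference scale.
    return (raw - offset) / slope

print(round(corrected(50.9), 2))  # ~50.1, close to the true 50.0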
 
