# I Measurement uncertainty

1. Jul 16, 2017 at 3:24 PM

### fonz

I have seen similar threads on here but not one with any detailed answer so I felt I would ask myself.

I took a short undergrad module in measurement and uncertainty, intended to prepare us for the numerous lab sessions and reports that would follow in the subsequent modules. In that particular module the concept of uncertainty was introduced along with a basic method of calculating the uncertainty from a set of results. Without going into detail, the method relied on taking repeated measurements and deriving the uncertainty from a statistical analysis of the results.

What never crossed my mind at the time was the question of where the accuracy (and precision) of the instrument used to record the results factors into this estimate.

Suppose that I were to make just one measurement using an instrument with a specified accuracy and then want to find the uncertainty of the measurement I have just made. How is this achieved?

And finally, let's say I have the stated accuracy of the instrument from the manufacturer and the calibration tolerance. What is the relationship between the two and how do they contribute to the uncertainty?

2. Jul 16, 2017 at 4:47 PM

### Staff: Mentor

In general you can split your uncertainty into two components:
- systematic: you are wrong in the same way no matter how often you repeat the measurement. This can come from a ruler that is too long, a poorly calibrated scale, or similar things.
- statistical: you are wrong in a random way in each measurement. You can often estimate this by taking multiple measurements. If that is not feasible, you can use other, experiment-dependent approaches to estimate this uncertainty.
If in doubt, ask the manufacturer.
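The statistical component described above can be sketched numerically. This is a minimal example, with hypothetical readings, of the standard textbook recipe: take the mean of repeated measurements, compute the sample standard deviation, and divide by the square root of the number of readings to get the standard uncertainty of the mean.

```python
import math

# Hypothetical repeated readings of the same quantity with the same instrument
readings = [9.81, 9.79, 9.83, 9.80, 9.82]

n = len(readings)
mean = sum(readings) / n

# Sample standard deviation (n - 1 in the denominator, Bessel's correction)
std_dev = math.sqrt(sum((x - mean) ** 2 for x in readings) / (n - 1))

# Standard uncertainty of the mean: s / sqrt(n)
u_a = std_dev / math.sqrt(n)

print(f"mean = {mean:.3f}, u_A = {u_a:.4f}")
```

Note that this captures only the random scatter; a systematic error (e.g. a miscalibrated scale) would shift every reading by the same amount and be invisible to this calculation.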

3. Jul 17, 2017 at 1:43 PM

### fonz

Thank you for your reply. When you say experiment-dependent approaches is there a standard for estimating uncertainty in this way?

EDIT: I just did a quick Wikipedia search and found that there are two types of uncertainty evaluation: Type A and Type B. I suspect that the module I took described the Type A method, whereas my question appears to be answered by the description of the Type B method. Can you confirm?

Also, is the calibration tolerance a measure of uncertainty in the same way that the manufacturer's stated accuracy is a measure of uncertainty? And are they systematic uncertainties, random uncertainties, or a combination of both?

Last edited: Jul 17, 2017 at 2:27 PM
4. Jul 17, 2017 at 7:46 PM

### Staff: Mentor

It depends on the experiment and the analysis method; there is no rule that fits all (or even most) experiments.
Yes.
If you use the same scale for all measurements, a wrong scale will have the same deviation in every measurement (assuming the measurements are not done at completely different points of the scale). It is a systematic uncertainty.
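To make the Type A / Type B distinction above concrete: a common convention (per the GUM) is to treat a manufacturer's stated accuracy of ±a as a rectangular distribution, giving a Type B standard uncertainty of a/√3, and then combine it in quadrature with the Type A (statistical) component. The numbers below are hypothetical, chosen only for illustration:

```python
import math

# Hypothetical manufacturer spec: accuracy of +/- 0.05 units.
# With no further information, treat it as a rectangular distribution:
# Type B standard uncertainty = half-width / sqrt(3).
accuracy_half_width = 0.05
u_b = accuracy_half_width / math.sqrt(3)

# Hypothetical Type A component from repeated measurements
u_a = 0.007

# Combined standard uncertainty: add the components in quadrature
u_combined = math.sqrt(u_a ** 2 + u_b ** 2)

print(f"u_B = {u_b:.4f}, u_c = {u_combined:.4f}")
```

Here the instrument (Type B) term dominates, which is typical when only a single reading is taken with a modest instrument.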
