B Calculating Uncertainty of Mean for 0.5 mm Ruler Measurements

utp9
I am following this: avntraining.hartrao.ac.za/images/Error_Analysis.pdf

I have a ruler with an uncertainty of ± 0.5 mm. I made a calculation subtracting one measurement taken with the ruler from another measurement, making the uncertainty for the data ± 1.0 mm.

As I have four trials, I calculated the mean. Hence, I must calculate the uncertainty of the mean. Using the formula given in the document, would the uncertainty be just that, or do I add the ruler's uncertainty as well?

Example:
Using the ##\Delta x_{avg}## formula I get an uncertainty of 0.2 mm (rounded). So would that be my uncertainty? Or do I add the ruler's uncertainty from the first calculation, the 1.0 mm, to it to get ± 1.2 mm?
 
utp9 said:
I have a ruler with an uncertainty of ± 0.5 mm
Do you know about the distinction between systematic errors and random errors?

[edit] Well, you must, because the document you refer to treats it. In your example you achieve a precision of 0.2 mm, but if the ruler is 0.5 mm off, that last error is common to all observations. So you can report ##x\pm 0.2 _{( {\sf stat})} \pm 0.5 _{( {\sf syst})} ##.
In case you want to report a single error, add in quadrature (resulting in ##\pm 0.5## too because of rounding).
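(Written out with the numbers in this thread: ##\sqrt{0.2^2 + 0.5^2} \approx 0.54## mm, which indeed rounds to ##\pm 0.5## mm.)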
 
This is a situation where we must choose between giving advice about how to follow a set of directions (http://avntraining.hartrao.ac.za/images/Error_Analysis.pdf) vs discussing the complicated mathematics that lies behind that set of directions and perhaps advising that they be disobeyed or improved.

Taking the former course:

utp9 said:
I have a ruler with an uncertainty of ± 0.5 mm. I made a calculation subtracting one measurement taken with the ruler from another measurement, making the uncertainty for the data ± 1.0 mm.

If you judge the "uncertainty" of the ruler by half the distance between its finest graduations, the example of measuring the distance between the legs of the grasshopper makes it clear that the "uncertainty" in the distance measurement is not to be calculated as ##\pm (0.5 + 0.5)##, but rather as ##\pm \sqrt{ 0.5^2 + 0.5^2}##.
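(Numerically that is ##\sqrt{0.5^2 + 0.5^2} \approx 0.7## mm, rather than the ##\pm 1.0## mm obtained by straight addition.)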

As I have four trials, I calculated the mean. Hence, I must calculate the uncertainty of the mean. Using the formula given in the document would the uncertainty be just that
Just what? What formula?
Are you referring to the example where the uncertainty of the mean is calculated for two data sets? In that example, the uncertainty of the mean for each data set is calculated by ##\frac{R}{2 \sqrt{N}}##, where ##R## is the maximum measurement minus the minimum measurement and ##N## is the number of measurements.
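If it helps to see that recipe worked through, here is a minimal sketch; the four readings are made-up placeholders, not your data:

Python:
import math

# Four hypothetical readings in mm -- placeholders, not the OP's data
readings = [10.0, 10.3, 10.5, 10.8]

N = len(readings)
mean = sum(readings) / N
R = max(readings) - min(readings)        # range: maximum minus minimum measurement
delta_x_avg = R / (2 * math.sqrt(N))     # uncertainty of the mean, per the document's recipe

print(f"mean = {mean:.2f} mm, R = {R:.2f} mm, delta_x_avg = {delta_x_avg:.2f} mm")
# with these numbers: mean = 10.40 mm, R = 0.80 mm, delta_x_avg = 0.20 mm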

or do I add the ruler's uncertainty as well?

That's a good question! The example of measuring perimeter of a fence shows how to account for (not literally "add") the uncertainties in individual measurements. However the measurements of the different sides are not each measurements of the same thing. Is there more material in the text? We need an example where several measurements of the same thing are taken, each with given uncertainty.
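(The fence example presumably combines the individual side uncertainties in quadrature, ##\sqrt{\sum_i (\Delta x_i)^2}##, in the same spirit as the grasshopper example above.)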
 
BvU said:
Do you know about the distinction between systematic errors and random errors?

[edit] Well, you must, because the document you refer to treats it. In your example you achieve a precision of 0.2 mm, but if the ruler is 0.5 mm off, that last error is common to all observations. So you can report ##x\pm 0.2 _{( {\sf stat})} \pm 0.5 _{( {\sf syst})} ##.
In case you want to report a single error, add in quadrature (resulting in ##\pm 0.5## too because of rounding).
Stephen Tashi said:
This is a situation where we must choose between giving advice about how to follow a set of directions (http://avntraining.hartrao.ac.za/images/Error_Analysis.pdf) vs discussing the complicated mathematics that lies behind that set of directions and perhaps advising that they be disobeyed or improved.

Taking the former course:
If you judge the "uncertainty" of the ruler by half the distance between its finest graduations, the example of measuring the distance between the legs of the grasshopper makes it clear that the "uncertainty" in the distance measurement is not to be calculated as ##\pm (0.5 + 0.5)##, but rather as ##\pm \sqrt{ 0.5^2 + 0.5^2}##.

Just what? What formula?
Are you referring to the example where the uncertainty of the mean is calculated for two data sets? In that example, the uncertainty of the mean for each data set is calculated by ##\frac{R}{2 \sqrt{N}}##, where ##R## is the maximum measurement minus the minimum measurement and ##N## is the number of measurements.
That's a good question! The example of measuring perimeter of a fence shows how to account for (not literally "add") the uncertainties in individual measurements. However the measurements of the different sides are not each measurements of the same thing. Is there more material in the text? We need an example where several measurements of the same thing are taken, each with given uncertainty.

Would something like this work?
[Image attachment: Annotation 2020-04-13 160408.png]

Or would that contradict this?
[Image attachment: Annotation 2020-04-13 160532.png]

I have equations for the rest of the quantities before Table 5, already employed in the previous tables; hence why there is only one equation and calculation here.

Thanks for the replies :)
 
utp9 said:
Would something like this work?
Hard to say without explanation what all this is and how it came about.
 
utp9 said:
As I have four trials,
I don't understand how "four trials" relates to the chart for your data. What is the definition of a "trial"? Was each trial a measurement performed on the same object?
 
BvU said:
Hard to say without explanation what all this is and how it came about.
Stephen Tashi said:
I don't understand how "four trials" relates to the chart for your data. What is the definition of a "trial"? Was each trial a measurement performed on the same object?

I apologize, obviously you can't see what I have written down.
I actually read the whole PDF document instead of just browsing over it, and now I understand what I'm supposed to do.

Here's the data I was referring to:
[Image attachment: Annotation 2020-04-13 191843.png]
[Image attachment: Annotation 2020-04-13 192057.png]

What I've done here is logical, right?
I can now use ##\Delta x_{avg}## in my graphs for error bars?
 
utp9 said:
and now I understand what I'm supposed to do.

If what you are supposed to do is to follow the rules set out in the document, I think you have followed them correctly as far as computing the individual "uncertainties". The author of the document doesn't say how to compute the length of error bars.

Using your notation, my guess is that the author would compute the "uncertainty" of the average of N measurements, each of which has an uncertainty of ##\triangle z_{sys}## associated with it as: ##\sqrt{ (\triangle x_{avg})^2 + \frac{ (\triangle z_{sys})^2} {N} }##. We should ask for other opinions!
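For what it's worth, here is a minimal sketch that just evaluates that guessed expression (the numbers are placeholders, not the data from this thread):

Python:
import math

# Placeholder values, not the data from this thread
delta_x_avg = 0.2    # statistical uncertainty of the mean (mm)
delta_z_sys = 0.5    # systematic (ruler) uncertainty of a single reading (mm)
N = 4                # number of measurements

# Evaluate the combination guessed above: sqrt(dx_avg^2 + dz_sys^2 / N)
combined = math.sqrt(delta_x_avg**2 + delta_z_sys**2 / N)
print(f"combined uncertainty = {combined:.2f} mm")   # about 0.32 mm with these numbers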

What I've done here is logical, right?

The document you have (like many others) does not give reasons that justify rules for computing uncertainties. To justify the rules in a logical manner requires a sophisticated knowledge of mathematical statistics. Before we can prove that a rule provides a "good" or "correct" estimate, we must define what "correctness" or "goodness" mean in a statistical context. That alone is not a simple task. My guess is that your studies don't ask you to do this yet.
 