Simple error propagation questions

In summary, the two rules stated in the book are: (1) the uncertainty of each single measurement is taken to be the sensitivity of the instrument; (2) if a quantity is obtained by multiplying a measured quantity by an exact number a, its uncertainty is a times the uncertainty of the original measurement.
  • #1
FranzDiCoccio
Hi,
I'm looking at an Italian high-school physics textbook. The subject is uncertainty propagation, and the target audience is 9th-grade students. The book is attributed to J.S. Walker, but I'm not sure how much it was reworked by the Italian editor.

I am a little puzzled by two rules that are stated in the book. I'd like to have your insight.
So, as I mention, the subject is uncertainty propagation. Nothing very complex. No sums in quadrature for errors, just the "worst cases" (the "provisional rules" in J.R. Taylor's book).
The two rules puzzling me are
  1. Repeated measures of the same quantity
  2. Measured quantity times an exact number
Case 1: the idea is that the same quantity is directly measured several times, using the same instrument.
Case 1: the idea is that the same quantity is directly measured several times, using the same instrument.
The uncertainty of each single measurement is taken to be the smallest difference the instrument can resolve (this is referred to as the sensitivity of the instrument). So, for instance, 1 mm if the instrument is a pocket metric ruler.

Of course the best estimate for the quantity is the mean of the measured values. However, the book gives a simplified rule for the uncertainty of the mean. The authors probably thought that the standard deviation is too complex for students this age, and suggest taking as the uncertainty what the book calls the "half-dispersion": half the difference between the largest and the smallest measured values. This is a rough estimate, but it makes sense. There is a catch, though: if the half-dispersion turns out to be less than the uncertainty of the individual measurements, the uncertainty of the mean is taken to be not the half-dispersion but the sensitivity of the instrument.
I guess that this is to avoid a zero uncertainty when all the repeated measurements are the same...
Not pretty but it makes more or less sense.
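To make sure I'm reading the rule correctly, this is how I would write it down (my own sketch in Python, not from the book):

```python
# Sketch of the book's rule for repeated measurements (my reading of it).
def mean_with_uncertainty(measures, sensitivity):
    """measures: list of repeated readings; sensitivity: smallest division of the instrument."""
    best = sum(measures) / len(measures)                    # best estimate = mean
    half_dispersion = (max(measures) - min(measures)) / 2   # half the spread of the readings
    # the uncertainty is the half-dispersion, but never less than the sensitivity
    uncertainty = max(half_dispersion, sensitivity)
    return best, uncertainty

# e.g. four readings taken with a 1 mm ruler
print(mean_with_uncertainty([12.0, 13.0, 12.0, 14.0], 1.0))  # (12.75, 1.0)
```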

Case 2. If a quantity is obtained by multiplying a measured quantity times an exact number "a", the uncertainty is said to be "a" times the uncertainty on the original measure. Again, this makes sense. If "a" is an integer, this is a generalization of the "simple" rule for sums of measured quantities.
Again, there is a catch: the uncertainty cannot be less than the sensitivity of the instrument used for the original measure.
This sounds strange to me. It kind of defeats one practical use of multiplying a measure times a constant.
I'm thinking e.g. of measuring the thickness of a sheet of paper by measuring the thickness of a stack of N (identical) sheets, and dividing by N.
In this case the uncertainty would be much larger than it reasonably is. Say I have a thousand sheets, each 0.01 mm thick: I'd measure 1 cm for the total thickness, perhaps with an uncertainty of 1 mm (pocket ruler). The best estimate for the thickness of one sheet would be 0.01 mm, but by the book's rule its uncertainty would still be 1 mm, i.e. 100 times larger than the measure itself.
Not a very precise measure.
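Spelling out the numbers: the book's rule gives ##d = 0.01\ \mathrm{mm} \pm 1\ \mathrm{mm}##, whereas dividing the uncertainty by ##N## as well, as I would naively do, gives ##d = (10 \pm 1)\ \mathrm{mm}/1000 = (0.010 \pm 0.001)\ \mathrm{mm}##, i.e. the same 10% relative uncertainty as the measurement of the whole stack.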

Probably the point is that in this case one is not measuring the thickness of an individual object, but the thickness of the "average sheet".

Can someone give me some more insight on these "catches" (if there is some)?
Thanks
 
  • #2
There are lots of ideas about estimating uncertainties - basically it is all guesswork of some kind or another. The guiding principle is that you should make the smallest guess that you can be very sure is bigger than the "actual" value. The "uncertainty on an uncertainty value" is typically very large. This principle seems to be uppermost in the author's mind - though I suspect you are right that an editor has shaped these sections too.

If I take x,y,z as measurements and a,b,c as constants, then the cases are:

1. there are n independent measurements of x, so estimate x to be: ##x=\mu_x \pm \frac{1}{\sqrt{n}}\sigma_x##
... this suggests a shortcut: if the distribution is assumed to be reasonably symmetric, take half the interquartile range, or a quarter (or a sixth) of the range of the data.
Half the range would represent an estimate for the 2##\sigma## confidence limits.
It may be a decent guess, for well-behaved statistics, if the sample size is very small and the person taking the measurement is not very skilled.

2. here ##z=ax## so ##\sigma_z = |a|\sigma_x##, provided ##|a|\sigma_x \geq \sigma_x##, where ##\sigma_x## is the instrument uncertainty.
The book is clearly concerned about the case ##|a|<1##. But is that really a problem?

Your example is good, but there is also no reason the units have to match: consider ##E=hc/\lambda## with a wavelength of 1 m and an uncertainty of 0.001 m. Since ##hc## is of order ##10^{-25}\ \mathrm{J\,m}##, it makes no sense to quote the uncertainty on the energy as 0.001 of... what, joules? And what if I wanted the energy in MeV? It seems the rule would make the uncertainty in E depend a great deal on an arbitrary choice of units. However, it does make sense to say that ##\sigma_E/E \geq \sigma_\lambda/\lambda## ... does the text provide examples?
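Putting rough numbers on that: with ##\lambda = (1.000 \pm 0.001)\ \mathrm{m}## one gets ##E = hc/\lambda \approx 1.99\times 10^{-25}\ \mathrm{J}##, and propagating the 0.1% relative uncertainty gives ##\sigma_E \approx 2\times 10^{-28}\ \mathrm{J}##: a number that looks completely different in J or in eV, while the 0.1% is the same in any system of units.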

Notes: If all the measurements are the same (so the variation lies inside the instrument resolution), or there are only a small number of measurements, then ##\sigma_x## would be half the instrument resolution ... that is ##\pm##0.5 mm for a standard desktop ruler.

It is common for basic texts to use twice this value since they are anticipating that students will be sloppy in the placement of the 0mm mark (which a skilled user can set to within the width of the line indicating 0mm). Skilled users can get the instrument resolution down to 0.5mm ... giving 0.25mm or 0.3mm uncertainties. Not usually expected of 9th grade (13yo) students.

Just because the text says this does not mean it has to be taught that way.
 
  • #3
Hi Simon,

thanks for your kind reply.
I think the course is built to be "not too scary". Probably the idea is that some students would be scared off by sigma and its definition.
After all, some students really seem to struggle with much simpler ideas.

I see your points about the second case. I had not thought of unit conversion, but it makes sense. Right now, though, I don't see an example that would make the point clear to the average student. To add something about the suggested rule: it seems to me that it clashes with the intuition that a constant factor should not change the relative uncertainty, whereas in my previous example it would. Not sure I'm saying anything new.

I have to think about whether to clarify this issue or let it go undetected. So far no student has raised the point or asked a question. I think a clarification would be worthwhile if we were to face the problem in the lab. I am not sure there will be an occasion soon, so perhaps it is not worth filling the students' heads with more rules that they won't practice anyway.

Thanks a lot again.
This was useful.
Francesco
 
  • #4
In NZ error propagation is simply not taught at 9th-grade level ... where data is likely to be messy, students are encouraged to graph it and sight a line of best fit.

For option 2, I'd just tell them the book is simply wrong on this point. Brighter kids can be encouraged to figure it out from the basic rules.

The basic rules are simple enough: adding measurements means adding the absolute errors, multiplying measurements means adding the relative errors. Then there are rules of thumb that act as shortcuts because, ultimately, these are estimations (they have done a unit on estimation, right?). They are old enough to cope with that, and there is no scary maths... everything follows.
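As a sketch (my own, just to be concrete), carrying each measurement as a (value, uncertainty) pair, those worst-case rules would look like this:

```python
# Worst-case ("provisional") rules, with each measurement as a (best value, uncertainty) pair.

def add(x, y):
    """Sum of two measurements: absolute uncertainties add."""
    return (x[0] + y[0], x[1] + y[1])

def mul(x, y):
    """Product of two measurements: relative uncertainties add."""
    best = x[0] * y[0]
    rel = x[1] / abs(x[0]) + y[1] / abs(y[0])
    return (best, abs(best) * rel)

def scale(a, x):
    """Exact constant times a measurement: the relative uncertainty is unchanged."""
    return (a * x[0], abs(a) * x[1])

# The paper-stack example from post #1: a 1000-sheet stack measured as (10 ± 1) mm.
print(scale(1 / 1000, (10.0, 1.0)))  # -> (0.01, 0.001), i.e. (0.010 ± 0.001) mm per sheet
```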
 
  • #5
I learned that the uncertainty in a graduated measuring device (like a ruler or graduated cylinder) is 1/10 the spacing between the graduations. That's because your eye can usually discern a little better than the closest line. Realistically, I don't know if I can do 1/10 of a mm with a mm ruler, but 1/5 of a mm is no problem.
 

1. What is error propagation?

Error propagation is the process of determining the uncertainty or error in a final result based on the uncertainties or errors in the individual measurements or values used to calculate it. It is important in scientific research and data analysis to understand and account for any potential errors in the final results.

2. How do you calculate error propagation?

The formula for error propagation depends on the type of calculation being performed. For sums and differences, the (worst-case) error is found by adding the absolute errors of the individual values; for products and quotients, the relative errors add. For more complicated expressions, the error is propagated using the partial derivatives of the result with respect to each variable. There are also computer programs and online calculators available to help with error propagation calculations.
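In symbols, for a result ##f(x_1,\dots,x_n)## with independent uncertainties ##\delta x_i##, the standard (quadrature) formula is ##\delta f = \sqrt{\sum_i \left(\frac{\partial f}{\partial x_i}\,\delta x_i\right)^2}##, while the worst-case rules used in this thread replace the sum in quadrature with a plain sum, ##\delta f = \sum_i \left|\frac{\partial f}{\partial x_i}\right|\delta x_i##.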

3. What is the difference between absolute and relative error?

Absolute error is the difference between the measured or calculated value and the true or accepted value. It is expressed in the same units as the original measurement. Relative error, on the other hand, is the absolute error divided by the true or accepted value. It is typically expressed as a percentage and allows for comparison between different measurements with different units.
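For example, a length measured as 10.2 cm against an accepted value of 10.0 cm has an absolute error of 0.2 cm and a relative error of 0.2/10.0 = 2%.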

4. How can error propagation be minimized?

Error propagation can be minimized by using more precise instruments and measurements, taking multiple readings, and using statistical methods to analyze data. It is also important to identify and account for any potential sources of error in the experimental setup.

5. What is the significance of error propagation in scientific research?

Error propagation is crucial in scientific research as it allows us to understand and quantify the uncertainty in our results. This helps us to determine the reliability and accuracy of our findings and to make informed decisions based on the data. It also allows for comparison between different experiments and helps to identify areas for improvement in future research.
