High School Simple error propagation questions

The discussion revolves around uncertainty propagation as presented in an Italian high-school physics textbook, particularly two rules regarding repeated measurements and multiplying a measured quantity by an exact number. For repeated measures, the book suggests using "half-dispersion" as a simplified uncertainty estimate, but it emphasizes that this value cannot be less than the instrument's sensitivity, which can lead to inflated uncertainties. In the case of multiplying a measured quantity by a constant, the book states that the uncertainty scales with the constant, but this can result in unrealistic uncertainty values that exceed the actual measurement's precision. Participants express concern that these rules may confuse students and suggest that simpler, more intuitive approaches could be more effective for teaching. Overall, the conversation highlights the challenges of teaching error propagation to younger students while maintaining accuracy and clarity.
FranzDiCoccio
Hi,
I'm looking at an Italian high-school physics textbook. The subject is uncertainty propagation, and the target is 9th grade students. The book is allegedly by J.S. Walker, but I'm not sure how much it was "redacted" by the Italian editor.

I am a little puzzled by two rules that are stated in the book. I'd like to have your insight.
So, as I mention, the subject is uncertainty propagation. Nothing very complex. No sums in quadrature for errors, just the "worst cases" (the "provisional rules" in J.R. Taylor's book).
The two rules puzzling me are
  1. Repeated measures of the same quantity
  2. Measured quantity times an exact number
Case 1: the idea is that the same quantity is directly measured several times, using the same instrument.
The uncertainty of each single measure is taken to be the smallest difference that the instrument can measure (this is referred to as sensitivity of the instrument). So, for instance, 1 mm if the instrument is a pocket metric ruler.

Of course the best estimate for the measure is the mean of the measured values. However, the book gives a simplified rule for the uncertainty of the mean. The authors probably thought that the standard deviation is too complex for students this age, and suggest estimating the uncertainty with what the book calls the "half-dispersion": half the difference between the largest and the smallest measured value. This is a rough estimate, but it makes sense. There is a catch, though: if the half-dispersion turns out to be less than the uncertainty in the individual measurements, the uncertainty of the mean is taken to be not the half-dispersion but the sensitivity of the instrument.
I guess that this is to avoid a zero uncertainty when all the repeated measurements are the same...
Not pretty but it makes more or less sense.
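The book's rule, as I read it, can be sketched in a few lines of Python (the function name and the sample readings are mine, made up for illustration):

```python
# Sketch of the textbook's simplified rule (my reading, not official code):
# best estimate = mean of the readings; uncertainty = half the spread
# (max - min) / 2, floored at the instrument sensitivity so it never
# drops to zero when all readings coincide.

def mean_with_half_dispersion(values, sensitivity):
    """Return (best_estimate, uncertainty) per the textbook's rule."""
    best = sum(values) / len(values)
    half_dispersion = (max(values) - min(values)) / 2
    # The "catch": never report less than the instrument's sensitivity.
    uncertainty = max(half_dispersion, sensitivity)
    return best, uncertainty

# Five readings (mm) with a 1 mm pocket ruler:
print(mean_with_half_dispersion([12.0, 13.0, 12.0, 14.0, 12.0], 1.0))
# -> (12.6, 1.0)

# Identical readings: the floor prevents a zero uncertainty.
print(mean_with_half_dispersion([12.0, 12.0, 12.0], 1.0))
# -> (12.0, 1.0)
```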

Case 2. If a quantity is obtained by multiplying a measured quantity times an exact number "a", the uncertainty is said to be "a" times the uncertainty on the original measure. Again, this makes sense. If "a" is an integer, this is a generalization of the "simple" rule for sums of measured quantities.
Again, there is a catch: the uncertainty cannot be less than the sensitivity of the instrument used for the original measure.
This sounds strange to me. It kind of defeats one practical use of multiplying a measure by a constant.
I'm thinking e.g. of measuring the thickness of a sheet of paper by measuring the thickness of a stack of N (identical) sheets, and dividing by N.
In this case the uncertainty would be much larger than it reasonably is. Like if I have a thousand sheets 0.01 mm thick, I'd measure 1 cm for the total thickness, perhaps with an uncertainty of 1 mm (pocket ruler). The measure would be 0.01 mm, but its uncertainty would still be 1mm, i.e. 100 times larger.
Not a very precise measure.
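Putting the numbers of the paper-stack example into a short Python sketch makes the problem explicit (variable names are mine; the figures are the ones above):

```python
# Paper-stack example: a stack of N = 1000 sheets measures 10 mm
# with a 1 mm pocket ruler. Dividing by N gives the per-sheet thickness.
N = 1000
stack_thickness = 10.0    # mm
stack_uncertainty = 1.0   # mm (ruler sensitivity)
sensitivity = 1.0         # mm

sheet = stack_thickness / N                 # 0.01 mm per sheet
scaled_uncertainty = stack_uncertainty / N  # 0.001 mm: the sensible result

# The book's "catch" would floor this at the instrument sensitivity:
book_uncertainty = max(scaled_uncertainty, sensitivity)  # 1 mm (!)

print(sheet, scaled_uncertainty, book_uncertainty)
# -> 0.01 0.001 1.0 : the floored uncertainty is 100x the measured value
```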

Probably the point is that in this case one is not measuring the thickness of an individual object, but the thickness of the "average sheet".

Can someone give me some more insight on these "catches" (if there is some)?
Thanks
 
There are lots of ideas about estimating uncertainties - basically it is all guesswork of some kind or another. The guiding principle is that you should make the smallest guess that you can be very sure is bigger than the "actual" value. The "uncertainty on an uncertainty value" is typically very large. This principle seems to be uppermost in the author's mind - though I suspect you are correct to suspect an editor has shaped these sections too.

If I take x,y,z as measurements and a,b,c as constants, then the cases are:

1. there are n independent measurements of x, so estimate x to be: ##x=\mu_x \pm \frac{1}{\sqrt{n}}\sigma_x##
... this would suggest a shortcut: if the distribution can be assumed fairly symmetric, take half the interquartile range, or a quarter (or a sixth) of the full range of the data.
Half the range would represent an estimate for the ##2\sigma## confidence limits.
It may be a decent guess, for well behaved statistics, if the sample size is very small and the person taking the measurement is not very skilled.
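A quick numeric comparison of the two estimates (illustrative data only; note the quarter-range is a crude stand-in for ##\sigma_x## itself, while ##\sigma_x/\sqrt{n}## is the standard error of the mean):

```python
# Compare the standard error of the mean with the range-based shortcut.
# Data are simulated readings, purely for illustration.
import random
import statistics

random.seed(0)
data = [random.gauss(100.0, 2.0) for _ in range(10)]

mean = statistics.mean(data)
# Standard error of the mean: sigma_x / sqrt(n)
sem = statistics.stdev(data) / len(data) ** 0.5
# Shortcut: a quarter of the range, since half the range ~ 2 sigma
quarter_range = (max(data) - min(data)) / 4

print(f"mean = {mean:.2f}, sem = {sem:.2f}, quarter-range = {quarter_range:.2f}")
```

For small samples the two can differ noticeably, which is why the shortcut is only a guess for "well behaved" data.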

2. here ##z=ax## so ##\sigma_z = |a|\sigma_x## provided ##|a|\sigma_x \geq \sigma_x##, where ##\sigma_x## is the instrument uncertainty.
The rule is clearly concerned about the case ##|a|<1##. But is that really a problem?

Your example is good, but there is also no reason the units have to match: consider ##E=hc/\lambda## with a wavelength of 1 m and uncertainty 0.001 m. Since ##hc## is of order ##10^{-25}## J m, it makes no sense to quote the uncertainty on the energy as 0.001 of anything... 0.001 what, joules? What if I wanted the energy in MeV? It seems the rule would make the uncertainty in E vary a great deal depending on an arbitrary choice of units! However, it does make sense to say that ##\sigma_E/E \geq \sigma_\lambda/\lambda## ... does the text provide examples?
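The unit-independence point can be checked numerically. A minimal sketch, using the approximate value of ##hc## and the wavelength figures above:

```python
# Scaling by an exact constant preserves the relative uncertainty,
# whatever the absolute scale of the result. Values are illustrative.
hc = 1.98645e-25   # J*m, approximate physical constant
lam = 1.0          # m
sigma_lam = 0.001  # m

E = hc / lam
# For z = a / x with a exact, |dz/z| = |dx/x|, so:
sigma_E = E * (sigma_lam / lam)

# Relative uncertainties match despite the ~1e-25 absolute scale:
print(sigma_E / E, sigma_lam / lam)  # both approximately 0.001
```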

Notes: if all the measurements come out the same (so the variation lies inside the instrument resolution), or there are only a small number of measurements, then ##\sigma_x## would be half the instrument resolution ... that is, ##\pm##0.5 mm for a standard desktop ruler.

It is common for basic texts to use twice this value since they are anticipating that students will be sloppy in the placement of the 0mm mark (which a skilled user can set to within the width of the line indicating 0mm). Skilled users can get the instrument resolution down to 0.5mm ... giving 0.25mm or 0.3mm uncertainties. Not usually expected of 9th grade (13yo) students.

Just because the text says this does not mean it has to be taught that way.
 
Hi Simon,

thanks for your kind reply.
I think the course is built to be "not too scary". Probably the idea is that some students would be scared by sigma and its definition.
After all, some students really seem to struggle with much simpler ideas.

I see your points about the "second case". I did not think of unit conversion, but it makes sense. However right now I do not see an example that would clarify the point to the average student. Anyway, to add something about the suggested "rule", it seems to me that it clashes with the intuition that a constant factor won't change the relative uncertainty. But, in my previous example, it would. Not sure I'm saying anything new.

I have to think whether to clarify this issue or let it go undetected. So far no student has raised the point or asked a question. I think a clarification would be worthwhile if we were to face the problem in the lab. I am not sure there will be an occasion soon, so perhaps it is not worthwhile to fill the students' heads with more rules that they won't practice anyway.

Thanks a lot again.
This was useful.
Francesco
 
In NZ error propagation is simply not taught at 9th grade level ... where data is likely to be messy, students are encouraged to graph it and sight a line of best fit.

For option 2, I'd just tell them the book is simply wrong on this point. Brighter kids can be encouraged to figure it out from the basic rules.

The basic rules are simple enough: when adding measurements you add the standard errors; when multiplying measurements you add the relative errors. Then there are rules of thumb that act as shortcuts because, ultimately, these are estimations (they have done a unit on estimation, right?). The students are old enough to cope with that, and there is no scary maths... everything follows.
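Those two worst-case rules can be written down in a few lines (a sketch; the function names are mine, not from any textbook):

```python
# Worst-case ("provisional") propagation rules:
# sums -> add absolute uncertainties; products -> add relative uncertainties.

def add_meas(x, dx, y, dy):
    """Sum of two measurements: absolute uncertainties add."""
    return x + y, dx + dy

def mul_meas(x, dx, y, dy):
    """Product of two measurements: relative uncertainties add."""
    z = x * y
    return z, abs(z) * (dx / abs(x) + dy / abs(y))

print(add_meas(10.0, 0.5, 4.0, 0.5))  # -> (14.0, 1.0)
print(mul_meas(10.0, 0.5, 4.0, 0.5))  # uncertainty is about 7.0
```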
 
I learned that the uncertainty in a graduated measuring device (like a ruler or graduated cylinder) is 1/10 the spacing between the graduations. That's because your eye can usually discern a little better than the closest line. Realistically, I don't know if I can do 1/10 of a mm with a mm ruler, but 1/5 of a mm is no problem.
 