Hi,
I'm looking at an Italian high-school physics textbook. The subject is uncertainty propagation, and the target audience is 9th-grade students. The book is allegedly by J.S. Walker, but I'm not sure how heavily it was adapted ("redacted") by the Italian editor.
I am a little puzzled by two rules that are stated in the book. I'd like to have your insight.
So, as I mentioned, the subject is uncertainty propagation. Nothing very complex: no sums in quadrature for errors, just "worst case" estimates (the "provisional rules" in J.R. Taylor's book).
The two rules puzzling me are:
- Repeated measurements of the same quantity
- A measured quantity times an exact number
Case 1. The uncertainty of each individual measurement is taken to be the smallest difference that the instrument can measure (this is referred to as the sensitivity of the instrument): for instance, 1 mm if the instrument is a pocket metric ruler.
Of course the best estimate for the quantity is the mean of the measured values. However, the book gives a simplified rule for the uncertainty of the mean. The authors probably thought that the standard deviation is too complex for students this age, and instead suggest estimating the uncertainty with what they call the "half-dispersion": half the difference between the largest and the smallest measurement. This is a rough estimate, but it makes sense. There is a catch, though: if the half-dispersion turns out to be less than the uncertainty of the individual measurements, the uncertainty of the mean is not the half-dispersion but the sensitivity of the instrument.
I guess that this is to avoid a zero uncertainty when all the repeated measurements are the same...
Not pretty, but it makes more or less sense.
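If I read the half-dispersion rule correctly, it amounts to something like the sketch below (the function name and signature are mine, just to spell out the rule as I understand it):

```python
def mean_with_uncertainty(values, sensitivity):
    """Best estimate and uncertainty for repeated measurements, per the book's
    rule as I understand it. 'sensitivity' is the smallest difference the
    instrument can measure (e.g. 1.0 for a pocket ruler graduated in mm)."""
    best = sum(values) / len(values)                    # best estimate = mean
    half_dispersion = (max(values) - min(values)) / 2   # (largest - smallest) / 2
    # the "catch": never report less than the instrument's sensitivity
    return best, max(half_dispersion, sensitivity)

# e.g. three readings (in mm) with a 1 mm ruler: the half-dispersion is 0.5 mm,
# so the reported uncertainty gets floored to the 1 mm sensitivity
print(mean_with_uncertainty([12.0, 12.0, 13.0], 1.0))   # (12.33..., 1.0)
```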
Case 2. If a quantity is obtained by multiplying a measured quantity by an exact number "a", the uncertainty is said to be "a" times the uncertainty of the original measurement. Again, this makes sense: for an integer "a" it is just the "simple" rule for sums applied to a quantity added to itself "a" times, generalized to any exact factor.
Again, there is a catch: the uncertainty cannot be less than the sensitivity of the instrument used for the original measurement.
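Spelled out (again my own sketch of what the book seems to say, with a hypothetical function name):

```python
def scaled_uncertainty(delta_x, a, sensitivity):
    """Uncertainty of y = a * x, with 'a' an exact number, per the book's rule
    as I read it: propagate as |a| * delta_x, but never report less than the
    sensitivity of the instrument used for the original measurement."""
    return max(abs(a) * delta_x, sensitivity)

# doubling a length measured as (25 +/- 1) mm: 2 mm, as expected
print(scaled_uncertainty(1.0, 2, 1.0))    # 2.0
# but dividing it by 10: the 0.1 mm propagated uncertainty is floored to 1 mm
print(scaled_uncertainty(1.0, 0.1, 1.0))  # 1.0
```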
This sounds strange to me. It kind of defeats one practical use of multiplying a measurement by a constant.
I'm thinking, for example, of measuring the thickness of a sheet of paper by measuring the thickness of a stack of N (identical) sheets and dividing by N.
In this case the uncertainty would be much larger than it reasonably should be. Say I have a thousand sheets, each 0.01 mm thick: I'd measure 1 cm for the total thickness, perhaps with an uncertainty of 1 mm (pocket ruler). Dividing by 1000, the single-sheet thickness would be 0.01 mm with an expected uncertainty of 0.001 mm, yet by the book's rule the uncertainty would still be 1 mm, i.e. 100 times larger than the value itself.
Not a very precise measurement.
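Putting my numbers into that rule (values from the example above; this is just the arithmetic, not anything from the book):

```python
N = 1000               # sheets in the stack
total = 10.0           # measured stack thickness, in mm (1 cm)
delta_total = 1.0      # sensitivity of the pocket ruler, in mm

thickness = total / N                    # 0.01 mm per sheet
scaled = (1 / N) * delta_total           # 0.001 mm: what plain scaling by 1/N gives
book = max(scaled, delta_total)          # 1.0 mm: the book's sensitivity floor
print(thickness, scaled, book)           # 0.01 0.001 1.0
```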
Probably the point is that in this case one is not measuring the thickness of an individual object, but the thickness of the "average sheet".
Can someone give me some more insight into these "catches" (if there is any to be had)?
Thanks