What is the uncertainty in a metre rule?

 Quote by Studiot Dickfore, if the result could be reported as 2 or 3, that is an uncertainty of 1.
http://en.wikipedia.org/wiki/Plus-mi...ion_indication

 Quote by truesearch I am concerned when I read in post 6 (a PF Mentor) that measurements can be made to within +/- 0.1mm using a mm scale.
As a standard deviation, 0.2 is easy to achieve and 0.1 might be possible (all in mm).

As an example, a standard deviation of 0.2 means that a true value of 0.3 is usually (~70% of the time) read as somewhere between 0.1 and 0.5. To achieve that, it is sufficient to see that 0.3 is clearly smaller than 0.5 but not close to 0.
Likewise, 0.5 should usually be read as somewhere between 0.3 and 0.7 - which covers everything not close to a mark on the scale.

If you want to give some "upper bound" for the error, you should use larger values, of course. But an upper bound is not always well-defined (apart from digital displays). And if you want to use the marks on the scale, you should format your numbers like $0.3^{+0.7}_{-0.3}\,\mathrm{mm}$
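The ~70% figure quoted above can be checked with a quick Monte Carlo sketch (an illustration only, assuming the reading error is roughly Gaussian with σ = 0.2 mm; the exact fraction within ±1σ of a normal distribution is about 68%):

```python
import random

# Hypothetical model: each reading of a true position of 0.3 mm is
# perturbed by a Gaussian reading error with sigma = 0.2 mm.
random.seed(0)
sigma = 0.2
true_value = 0.3
n = 100_000

readings = [random.gauss(true_value, sigma) for _ in range(n)]

# Fraction of readings landing between 0.1 and 0.5, i.e. within one sigma.
within = sum(0.1 <= r <= 0.5 for r in readings) / n
print(f"fraction within one sigma: {within:.3f}")  # ~0.68
```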

 Yes indeed, using ± notation. But if you are limited to scale divisions you cannot report half divisions: your statement has to be 2 or 3, not 2.5 ± 0.5. That implies you have to report 0, 1, 2, 3, 4, etc. So a report that the thickness is 2 implies that it is between 1 and 3, i.e. 2 ± 1, which is what I said. Incidentally, you need to revise your statement on end standards.

 Quote by mfb As an example, a standard deviation of 0.2 means that a true value of 0.3 is usually (~70% of the time) read as somewhere between 0.1 and 0.5. To achieve that, it is sufficient to see that 0.3 is clearly smaller than 0.5 but not close to 0.
I guess you meant 0.2 for the bolded number. Nevertheless, this is only true for a normal distribution. When we measure with a coarse scale such that we always get the length to lie between the same two divisions, the error is not of a statistical nature, and the uncertainty has a different meaning from the standard deviation. For a uniform distribution with ends a and b, the standard deviation is:
$$\sigma_{U} = \frac{b - a}{2 \sqrt{3}}$$
Notice that [itex]2 \sqrt{3} \approx 3.5[/itex]
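The uniform-distribution result above is easy to verify numerically. A sketch (the interval [0, 1] below stands in for one scale division):

```python
import math
import random

# One scale division, in units of the division width.
a, b = 0.0, 1.0

# Analytic standard deviation of a uniform distribution on [a, b].
sigma_analytic = (b - a) / (2 * math.sqrt(3))  # ~0.289 of the division width

# Monte Carlo cross-check.
random.seed(0)
samples = [random.uniform(a, b) for _ in range(200_000)]
mean = sum(samples) / len(samples)
sigma_mc = math.sqrt(sum((x - mean) ** 2 for x in samples) / len(samples))

print(f"analytic: {sigma_analytic:.4f}  Monte Carlo: {sigma_mc:.4f}")
```

So for a 1 mm division, "the whole division" as a uniform error corresponds to a standard deviation of only about 0.29 mm, which is the point being made about 2√3 ≈ 3.5.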

 Quote by Studiot Yes indeed, using ± notation. But if you are limited to scale divisions you cannot report half divisions: your statement has to be 2 or 3, not 2.5 ± 0.5. That implies you have to report 0, 1, 2, 3, 4, etc. So a report that the thickness is 2 implies that it is between 1 and 3, i.e. 2 ± 1, which is what I said. Incidentally, you need to revise your statement on end standards.
You can report half divisions, as it is customary in experimental physics.

As for end standards, we are not doing calibration of etalons. We are measuring the length of an object. Thus, we are free to slide the scale so that the left end coincides exactly with one of the ruler's divisions. Then, there is uncertainty in reading off only the right end.

There might be systematic errors due to poor calibration of the ruler's divisions, but that is another point.

 Quote by Dickfore As for end standards, we are not doing calibration of etalons. We are measuring the length of an object. Thus, we are free to slide the scale so that the left end coincides exactly with one of the ruler's divisions. Then, there is uncertainty in reading off only the right end.
This is a fundamental error. The process of sliding still constitutes a 'reading' or alignment error. It is not as accurate as the engineering process of aligning an end standard ruler, even with a comparator microscope which engineers also use.

And BTW why mention etalons?

 There is no way anyone could say +/- 0.1mm
Actually there is, but you require a draughtsman's scale rule with diagonal scales. Have you heard of these?
However, I have not seen one as long as 1 metre.

 The last few comments are missing the point and rely on unsubstantiated assumptions: trying to see something between divisions which is not there; assuming that there is some uniform scale within the division; assuming there is no distortion. The example of digital instruments should serve as a clue: there is no way to 'eyeball' how close the last digit is to the one above or the one below. ±1 max is a safe, sensible, objective bet.

 Quote by Studiot This is a fundamental error. The process of sliding still constitutes a 'reading' or alignment error.
Even if it does constitute an error, the error is of the order of the width of the mark, not of the order of half the distance between two marks. I don't know about your rulers, but the marks on mine are pretty thin.

 I have used draughtsman's scales... I have also used verniers.
 truesearch, the error is not the absolute maximum or minimum you can be wrong by; it is the average error. So it behaves like a random walk: for a random walk of N steps of length L each, the average distance you travel is L*sqrt(N). So the average error behaves like an RMS. In practice, the amount by which you are actually off will be approximately normally distributed, and the quoted error is the standard deviation of that distribution.
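The random-walk behaviour described above can be sketched numerically (an illustration with made-up parameters, not anything measured in this thread): the RMS displacement of N steps of length L comes out close to L*sqrt(N).

```python
import math
import random

random.seed(0)
L = 1.0        # step length
N = 100        # number of steps per walk
trials = 20_000

# Estimate the RMS displacement of a 1-D random walk of N steps.
sq_sum = 0.0
for _ in range(trials):
    position = sum(random.choice((-L, L)) for _ in range(N))
    sq_sum += position ** 2
rms = math.sqrt(sq_sum / trials)

print(f"RMS displacement: {rms:.2f}  (L*sqrt(N) = {L * math.sqrt(N):.2f})")
```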

 Quote by Dickfore Even if it does constitute an error, the error is of the order of the width of the mark, not of the order of half the distance between two marks. I don't know about your rulers, but the marks on mine are pretty thin.
So why can't you 'read' the other end to the same precision?

As a matter of interest, how do you guarantee that the aligned 'zero' stays put while you read the other end?

I know how the navy does it for a traverse tape, how an engineering workshop does it for an engineering endstop rule, and similarly how a draper's shop does it for a draper's endstop rule. Why do you think they do it this way, with an end stop, rather than your way?

 Quote by truesearch I have used draughtsman's scales
So you know they are commonly calibrated in 1/100 inch or 0.1 mm?

 Quote by Studiot So why can't you 'read' the other end to the same precision?
Because not all lengths in Nature are integer multiples of the divisions of our scale.

 Quote by Studiot As a matter of inerest how do you guarantee that the aligned 'zero' stays put while you read the other end?
You don't. But you are describing sources of error that are orders of magnitude smaller than the precision of the scale of the measuring instrument.

In fact, if the sources you allude to start giving comparable contributions, then your measuring scale is so precise that you no longer get the same result when you repeat the measurement. In other words, you start getting statistical errors.

 Quote by Studiot I know how the navy does it for a traverse tape and how an engineering workshop does it for an engineering endstop rule and similarly how a drapers shop does it for a drapers endstop rule. Why do you think they do it this way with an end stop rather than your way?
Probably to account for the fact that in those cases the object being measured is moved about violently during the measuring process. This, on the other hand, rarely happens in a physics lab.

 Dickfore, all you are proving is that different people in different 'laboratories' use different techniques and thereby achieve (slightly) different results by going their own way.

The whole object of calibration and standardisation is that anyone anywhere can achieve the same result under the same conditions. This involves standardisation of measurement technique as well as tools, in order to remove 'operator bias'. Measurement against a common end stop is one such standard. If a laboratory develops its own special techniques, it needs to report these as part of the results.

I once worked in such a laboratory, measuring the lengths of bricks to better than 10 thou using the lab's specially developed technique. But we never pretended it was 'standard' or that the method should be widely adopted.

 Quote by Studiot The whole object of calibration and standardisation is so that anyone anywhere can achieve the same result under the same conditions.
Provided that the results are reported to the same precision! What you are describing in the previous posts is comparing the result of a measurement made with a school ruler to that of a micrometer screw gauge.

Let us say that (by the method I described), I get a length measurement of 3.5±0.5 mm.

Then, you come along with your fancy equipment and get a result of 3.329 ± 0.007 mm (you had to repeat the experiment several times because you noticed that you got a different reading each time with your fine equipment; then you took the mean, found the standard deviation of the mean, and took a 95% confidence interval for the mean).

Does that make my measurement "wrong"?
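The repeated-measurement procedure described above (mean, standard deviation of the mean, confidence interval) can be sketched as follows. The readings below are made up purely for illustration; they are not the thread's numbers.

```python
import math

# Made-up repeated readings of the same length, in mm (illustrative only).
readings = [3.33, 3.34, 3.32, 3.33, 3.35, 3.32, 3.33, 3.34]

n = len(readings)
mean = sum(readings) / n

# Sample standard deviation (with Bessel's correction) ...
s = math.sqrt(sum((x - mean) ** 2 for x in readings) / (n - 1))

# ... and the standard error of the mean.
sem = s / math.sqrt(n)

# Rough 95% interval using the normal 1.96-sigma factor; for n = 8 a
# Student-t factor (~2.36) would strictly be more appropriate.
half_width = 1.96 * sem
print(f"{mean:.3f} +/- {half_width:.3f} mm")
```

Note how the quoted uncertainty shrinks as sqrt(n) with more repeats, which is exactly why the fine instrument ends up with a much smaller ± than a single ruler reading.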

Mentor
 Quote by truesearch I am concerned when I read in post 6 (a PF Mentor) that measurements can be made to within +/- 0.1mm using a mm scale.
Ultimately, the proper ± figure for scale-reading uncertainty depends on the person who is making the measurement and the instrument that he is using, and that person has to make a judgment about this.

I feel confident in assigning ±0.1 mm when using a metal scale with finely-engraved lines, in a way that eliminates or minimizes parallax error due to the thickness of the scale. It helps that I'm rather nearsighted so I can get my eye about 10 cm from the scale if I take off my eyeglasses. If I'm using a typical plastic ruler with relatively thick mm-lines, I might use ±0.2 mm. If I'm using a thick meter stick and can't lay it edgewise on the object being measured so that I have to sight across the thickness of the meter stick, I might use ±0.5 mm or even ±1.0 mm.

 Not at all to do with statistics, but everything to do with technique which contains inherent sources of error versus technique which avoids them. When you place your test piece and ruler against the end stop you have a guaranteed square and reproducible 'zero'. When you estimate the alignment of two lines along a sight line that may or may not be square, and hold the ruler and test piece at some random (albeit small) angle to each other, you have a recipe for variability of measurement. Notice I said 'sight line': two operators will align the pieces slightly differently by sight. They cannot do this with an end stop.

Edit: jtbell has just described the visual alignment issue to a T whilst I was posting.

 Quote by jtbell Ultimately, the proper ± figure for scale-reading uncertainty depends on the person who is making the measurement...
No. A wrong experimental procedure leads to systematic errors, which are not a measure of the uncertainty.

 Quote by jtbell ...and the instrument that he is using,...
Yes. But:

 Quote by jtbell I feel confident in assigning ±0.1 mm when using a metal scale with finely-engraved lines, in a way that eliminates or minimizes parallax error due to the thickness of the scale.
This is definitely wrong if the "finely-engraved lines" are a distance 1 mm apart, as given in the OP.
