
Difference between uncertainty and precision?

  1. Sep 9, 2016 #1
    Is precision the smallest division in measurement instruments?
    Is uncertainty one-tenth or a half of the smallest division in measurement instruments? I'm confused some say one-tenth, some say a half...
    Thanks in advance
     
  3. Sep 9, 2016 #2

    BvU

    Science Advisor
    Homework Helper
    Gold Member

    It's much more involved than that: if a weighing scale is badly calibrated, the accuracy suffers. If the zero of the scale is off, the accuracy suffers as well.
    Same answer. I take it you refer to instruments with needles (for most digital readouts the one-tenth doesn't apply, right?). Take a simple ruler, preferably one with a reasonable mm division: can you distinguish between 33.6 and 33.7 mm? The 1/10 is probably too optimistic and the 1/2 seems a bit too coarse.
     
  4. Sep 9, 2016 #3
    Is there a standard formula (convention) for this?
     
  5. Sep 9, 2016 #4

    Nidum

    Science Advisor
    Gold Member

    In engineering the permitted range of deviation from an ideal value is usually specified as a tolerance band.

    For instance, 35 mm +/- 0.1 mm.
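    A tolerance band is just an interval test; as a quick sketch (the function name and values are illustrative, not from the thread):

    ```python
    def in_tolerance(measured_mm, nominal_mm=35.0, tol_mm=0.1):
        """True if the measured value lies within the nominal +/- tolerance band."""
        return abs(measured_mm - nominal_mm) <= tol_mm

    print(in_tolerance(35.08))  # inside 35 +/- 0.1 mm
    print(in_tolerance(35.12))  # outside the band
    ```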
     
    Last edited: Sep 9, 2016
  6. Sep 9, 2016 #5

    BvU

    Science Advisor
    Homework Helper
    Gold Member

    I looked it up in G. L. Squires, Practical Physics (mine is a Dutch translation from 1968 :smile:) and there it says a ruler has a precision of 0.2 mm, BUT you have to avoid parallax error (by looking truly vertically) AND you have to take two measurements: one at the left end, one at the right end. So there you have a factor ##\sqrt 2## (if the measurements are independent) already. And sure enough you end up smack halfway between the 0.5 and the 0.1 mm!

    There are no strict standards; common sense is our best tool, followed by experience; a good dose of paranoia when experimenting comes in handy.
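    The ##\sqrt 2## combination can be sketched numerically (a minimal illustration, assuming the two end readings are independent with 0.2 mm precision each):

    ```python
    import math

    sigma_reading = 0.2  # mm, per-reading precision of a ruler (from Squires)

    # A length needs two independent readings (left end and right end),
    # so the uncertainties add in quadrature:
    sigma_length = math.hypot(sigma_reading, sigma_reading)  # 0.2 * sqrt(2)
    print(f"{sigma_length:.3f} mm")  # between the 1/2 and 1/10 rules of thumb
    ```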
     
  7. Oct 2, 2016 #6

    Ranger Mike

    Science Advisor

    Precision is the fineness of measurement.
    The gage maker's "Rule of Ten" was introduced in the 1930s.


    Gage Maker's "Rule of Ten"


    Before the introduction of the Coordinate Measuring Machine and its associated Windows-based software, common methods of measurement were fraught with error. These errors may be categorized as four sources: inherent instrument error, observational error, manipulative error and bias. To dramatically reduce these measurement errors, the "Ten-to-One Rule" was developed.


    Rule: The instrument must be capable of dividing the tolerance into ten parts.

    The purpose: To eliminate 99% of the instrumentation error of previous steps in measurement.

    When applied: To every step in the measurement sequence until the limit of the available instrument is reached.

    The result: Fewer bad parts accepted and fewer good parts rejected.
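    The rule itself is simple arithmetic; a sketch with illustrative numbers:

    ```python
    def required_resolution(tolerance_band):
        """Rule of Ten: the instrument should divide the tolerance into ten parts."""
        return tolerance_band / 10.0

    # A part at 35 mm +/- 0.1 mm has a total tolerance band of 0.2 mm,
    # so the instrument should resolve 0.02 mm or better.
    print(required_resolution(0.2))
    ```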



    The whole concept was to reduce the zone of uncertainty and achieve reliable measurement. Before the digital age, this rule was applied to vernier metrology instruments such as micrometers, scales and calipers. Let us return to the four sources of measurement error listed above.


    Observational error – This error is all but eliminated by the introduction of inexpensive DRO (digital readout) metrology instruments. Errors caused by misreading the lines on a micrometer have been eliminated (within reason) by the DRO device.


    Manipulative error – Operator influence during measurement is dramatically reduced when an electrical touch probe is used to acquire data. It is still possible to "crank down" a micrometer barrel when measuring a part; however, this act becomes an intentional event and not an error. When the operator chooses to completely ignore the ratchet screw attached to the micrometer barrel during the measurement process, the error is not random.


    Bias – Unconscious influence during measurement used to be a problem. Most precision measurements required several readings, which were averaged, and the obviously wrong ones were thrown out. This was an open invitation to bias. Again, the introduction of low-cost digital devices has largely eliminated these errors.


    Instrument error – Common practice says "the measuring instrument should be ten times as accurate as the part". What does this really mean? What is the goal? When you finally narrow the requirement down, the goal is to limit the amount of instrument error that can creep into the measurement to one percent. Restated, the rule is simply: the instrument should divide the tolerance into ten parts. Note that the rule as stated applies only to the precision of the instrument, not the accuracy, because accuracy is derived from the standard. If the standard is not accurate, all is lost. The fineness of measurement described above relates to the precision of measurement and, more directly, the resolution of the instrument.






    Why the ten-to-one ratio – This rule was established in an attempt to ensure that all instrument readings would fall within the zone of uncertainty of the instrument. This simply means that over 99% of all measurement readings would show up on the meter, a figure statistically determined from the 3-sigma confidence level. All repeat errors or dispersion of readings would appear on the meter or readout device. In the old days, inspectors wanted the dial indicator or gage meter to be able to display both the part error and the instrument error so that all possible problems were covered. I am sure you can see the main problem with this as tolerances tighten up: you may be discarding "bad parts" based on the error of the instrument.



    CMM and the "Rule of Ten"



    If one were to carry the 10-to-1 rule to its extreme, a typical CMM with a volumetric accuracy of .00025" would only be able to measure parts with a .0025" tolerance. If the part tolerance is .0005", then a CMM with an accuracy of .00005" would be required, along with a $250,000 price tag. No one I know of adheres to the "Rule of Ten" when discussing CMMs. So how do we measure a CMM's accuracy and repeatability?


    CMM Accuracy



    From its introduction until the implementation of error mapping, the accuracy of a CMM was measured using a laser interferometer. The laser beam was "bucked in" (made parallel) to the X axis and the readings over the entire length were observed. The laser reflector was usually positioned mid-axis on the Y and Z axes. Typical accuracy was stated as linear accuracy of +/- .0002" over the X axis, and the procedure was repeated on the Y and Z axes. With the adoption of the ANSI B89 standard, the ball bar was added to measure "volumetric accuracy". A ball bar looks like a dumbbell: two datum balls attached by a length of bar. One ball sits on a magnetic mount that permits rotation; the other ball is locked at various positions on the CMM and numerous measurements of the ball are made. This method of measurement had several flaws. VDI/VDE 2617 was added in an attempt to quantify the measuring uncertainty of the CMM. This standard provides for a 95% confidence interval and evaluates linear accuracy, specified as U1, and volumetric accuracy, specified as U3. In each case the specification provides 95% confidence, which means 5% of the observations can be excluded from the test data. This means "flyers" are tossed, which effectively allowed machine accuracy statements to report better accuracy than could actually be achieved.




    Understanding the basics of ISO 10360-2

    The ISO 10360-2 specification changed all of this, recognizing that in practical everyday measurement the CMM user does not have the luxury of excluding "flyers", and generally users do not even know when flyers occur. As such, the ISO spec requires 100% of all observations to be included in the evaluation of the CMM.


    Under ISO 10360-2, the machine is evaluated in at least three areas. MPEE is volumetric length measurement using calibrated gage blocks. Ball bars are not used, as their length is arbitrary and only the spheres can be calibrated. Moreover, the measurement of a sphere employs many points to resolve the center, which does not represent practical measurement. In practical measurement, point measurements are taken as required for the respective part feature, with the expectation that each point is accurate. There is no luxury of measuring multiple points and resolving one point to be used in the part-feature calculation. Under ISO 10360-2 discrete single points are measured bi-directionally to evaluate length, and the MPEE value reports the range of measurements from seven different positions with five different gage lengths, repeated three times. All 105 measurements, with their deviations from the certified lengths, are considered and provided as an uncertainty of measured length.
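    A hedged sketch of that evaluation (the readings below are invented; only the "keep every deviation, no flyers tossed" logic comes from the description above):

    ```python
    # Certified gage block lengths and CMM readings, in mm (illustrative values)
    certified = [100.0000, 200.0000, 300.0000]
    measured  = [100.0021, 199.9987, 300.0035]

    # Every measurement counts: deviation from certified length, none excluded
    deviations = [m - c for m, c in zip(measured, certified)]
    worst = max(abs(d) for d in deviations)
    print(f"worst deviation: {worst:.4f} mm")
    ```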





    The next ISO 10360-2 evaluation is MPEP, the probing uncertainty. The machine measures 25 discrete points, which are evaluated as 25 individual radii. The range of radii variation, min to max, is the MPEP value.
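    A minimal sketch of that radii-range calculation (toy data with five points instead of 25, and the sphere center assumed already fitted):

    ```python
    import math

    def probing_error(center, points):
        """Min-to-max spread of point-to-center radii (the MPEP idea)."""
        radii = [math.dist(center, p) for p in points]
        return max(radii) - min(radii)

    # Toy data: probed points nominally on a 12.5 mm radius sphere
    center = (0.0, 0.0, 0.0)
    points = [(12.5, 0.0, 0.0), (0.0, 12.502, 0.0), (0.0, 0.0, 12.499),
              (-12.501, 0.0, 0.0), (0.0, -12.5, 0.0)]
    print(f"radii spread: {probing_error(center, points):.3f} mm")
    ```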


    The next metric in the new Wenzel LH brochure is MPETHP, which is similar to MPEE; however, it is achieved by full-contact scanning of 4 lines, of which only one can be a full 360-degree equatorial scan. This evaluation comes from another ISO standard, ISO 10360-4, and is specific to full-contact scanning. As above, the points within each scan are treated as radii and the range of radii deviation, min to max, is reported.























    Sources



    Practical Metrology, K. J. Hume and G. H. Sharpe, London: Macdonald & Co. Ltd.

    Quality Control Handbook, J. M. Juran, New York: McGraw-Hill Book Co. Inc.

    Fundamentals of Dimensional Metrology, Ted Busch, Delmar Publishers Inc.

    Inspection and Gaging, Kennedy, Hoffman and Bond, New York: Industrial Press Inc.
     
    Last edited by a moderator: May 8, 2017