
Definition of Accuracy

  1. May 4, 2015 #1
    Hey, I am trying to teach accuracy and precision. I understand precision to mean two things: how repeatable measurements are (variation), and how finely measurements resolve (sensitivity).

    I am having a bigger problem defining accuracy. Most places I look define accuracy as how close measurements are to some true value. However, there is no 'true' value in science, only a value that is accepted as 'true'. How accurate are these accepted values? How do we determine their accuracy?
  3. May 4, 2015 #2
    My interpretation is that after experiments are repeated and keep yielding the same result, to some order of magnitude, that value becomes the accepted one. Only rigorous testing leads us to conclude that the speed of light is roughly 3E8 m/s in a 'true' sense. However, you can also derive the speed of light from other quantities whose values were themselves accepted through experiment or derivation. That's my take on it, anyway.

    Oh, also, on the point that no measurement is absolute/true: that is correct, but we generally stop at some order of magnitude, since it is unnecessary for our purposes to know the 90th decimal of pi when we can just use 3.141592 (unless you are a mathematician).
  4. May 4, 2015 #3
    That's a good understanding of what precision means.

    In a sense there is no way of knowing what the 'true value' of any measurement is, but that doesn't mean true values don't exist. For example, I can imagine that there truly is a single value for the charge of an electron. When Millikan first performed the oil drop experiment to measure this charge, he was able to state the result in terms of the precision he thought he had attained. The experiment and others like it were repeated by others, and we now have the current accepted value for the charge of the electron. Is this the true value? No - there is uncertainty (and good texts will include the uncertainty). But because many measurements of the electron's charge support each other, we can be confident that the 'true' value lies somewhere in the accepted value's range.

    Interestingly, Millikan's result is not in agreement with today's accepted value. If memory serves, he used an incorrect value for the viscosity of air; this systematic uncertainty threw off his results and was not noticed until some time after the experiment, and others like it, had been performed. So accepted values are determined by comparing careful experiments.

    Another example you might be interested in is the universal gravitational constant, G. This constant has been notoriously difficult to pin down. Even recent measurements have differed significantly - on the order of tenths of a percent, a far larger relative spread than for any other professionally measured constant. See the http://www.npl.washington.edu/eotwash/bigG [Broken] article.
  5. May 5, 2015 #4
    From a practical point of view, I believe you can safely assume that the "true value" in this context refers to the international standards maintained by National Measurement Institutes (such as NIST, PTB, NPL, etc.). They regularly publish values for constants via CODATA, and all measurement devices can (or at least should) trace their calibration to one of those NMIs - instruments are sometimes supplied with a calibration certificate. So when discussing the accuracy of an instrument, you are really referring to the national standards (unless you are measuring something like the speed of light, which is a constant with an exact value by definition).

    Note also that this is - for most measurements - of little practical importance: the various standards at the NMIs can in most cases be realized with a precision at least two orders of magnitude (often more) better than what you would find in a "consumer" instrument. Hence, the accuracy of a normal instrument will essentially never be limited by the realization of the standard. The only time this becomes an issue is when working with, e.g., optical atomic clocks, which are much more precise (by about three orders of magnitude) than the Cs-133 standard that defines the second, but which of course cannot really be said to be more accurate.
  6. May 5, 2015 #5
    Thanks for the information, guys! Just one more question. Would I be correct in stating that accuracy refers to the systematic/deterministic error of an experiment, while precision refers to the random/nondeterministic error?
  7. May 5, 2015 #6
    I think that is a good characterization. The typical analogy is that of the dartboard. If your darts (measurements) are tightly clustered they are precise. If they are on or near the bullseye they are accurate.
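    The dartboard picture can be put into a few lines of code. This is a minimal Python sketch (all numbers are made up for illustration): we inject a fixed bias (systematic error) and random scatter (random error) into simulated measurements, then recover the bias as the accuracy offset and the scatter as the precision.

```python
import random
import statistics

random.seed(0)  # fixed seed so the run is reproducible

TRUE_VALUE = 9.81  # hypothetical "true" value, e.g. g in m/s^2
BIAS = 0.05        # systematic error: shifts every measurement (hurts accuracy)
NOISE = 0.02       # random scatter: spreads the measurements (hurts precision)

# Simulate 1000 measurements carrying both kinds of error
measurements = [TRUE_VALUE + BIAS + random.gauss(0, NOISE) for _ in range(1000)]

# Accuracy: how far the centre of the cluster sits from the true value
accuracy_offset = statistics.mean(measurements) - TRUE_VALUE

# Precision: how tightly the cluster is grouped
precision = statistics.stdev(measurements)

print(f"mean offset (accuracy): {accuracy_offset:.4f}")  # close to BIAS
print(f"spread (precision):     {precision:.4f}")        # close to NOISE
```

    Note that the accuracy offset is only recoverable here because the code knows TRUE_VALUE; in a real experiment that is exactly the quantity you don't have, which is the point of the invisible-dartboard discussion below.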
  8. May 5, 2015 #7
    Accuracy involves the assessment of all uncertainties, while precision involves the assessment of uncontrollable fluctuations in the measurements. So accuracy and precision are interrelated. It is not rational to quote a measurement to high precision if the result is known to be inaccurate; similarly, it would be ridiculous to call a measurement accurate if its precision were low. Accuracy must therefore take the precision of the measurement into account.
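    This interrelation shows up in how uncertainties are conventionally combined: independent systematic and random components are added in quadrature (the approach of the GUM, the Guide to the Expression of Uncertainty in Measurement). A small sketch with made-up numbers:

```python
import math

def combined_uncertainty(systematic, random_std):
    """Combine independent systematic and random uncertainty components
    in quadrature, the usual convention for independent error sources."""
    return math.sqrt(systematic**2 + random_std**2)

# A precise but inaccurate measurement: quoting only the +/-0.01 random
# scatter would be misleading, since the systematic part dominates.
u = combined_uncertainty(systematic=0.5, random_std=0.01)
print(f"combined uncertainty: {u:.3f}")
```

    The quadrature sum is dominated by whichever component is larger, which is why a tiny random scatter cannot rescue a measurement with a large systematic error.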
  9. May 5, 2015 #8
    But yes, it is hard to estimate accuracy when you don't know the "ground truth" to compare against. You are left trying to eliminate every possible systematic error and then looking at the distribution of the measurements.
  10. May 5, 2015 #9
    But if you knew the "ground truth" what is the purpose of the measurement?
  11. May 5, 2015 #10
    Well, calibrating your gear for example.
  12. May 5, 2015 #11
    To extend the dartboard analogy just a little; simply imagine that the board itself is invisible. Indeed, one would trust tightly clustered darts as a better representation of the true value than ones that were spread out more. However, it would be hard to choose one set of tight measurements over another set of tight measurements if there was a significant discrepancy between the two sets without knowing something about the way the measurements were taken (and even then it might not be clear if one or both of the sets of measurements were in error). A nice illustration of how difficult it can be to detect systematic error.
  13. May 5, 2015 #12
    Case in point: I'm sure the team that claimed faster-than-light neutrinos had a neat cluster of measurements above c. It was a systematic error that put the whole cluster in the wrong ballpark.
  14. May 5, 2015 #13
    My favorite example: some years ago I bought a digital thermometer. It displayed the temperature to two decimal places (precision), but when I compared it to a calibrated mercury thermometer it was about 1 degree off (accuracy).

    Another example (presented to me at university in a lecture on measurement theory): in the early 60's an American set a new world record in the javelin throw. The record was (naturally) measured in feet and inches. When reporting it, a Norwegian newspaper journalist had carefully converted it to metric as 77.35721m. The last two digits subdivide a grain of sand, and the mark a javelin makes on landing is about 30cm long!
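    The sensible version of that conversion is easy to write down: convert exactly, then round to the resolution the original measurement actually supports. A sketch using a hypothetical throw (not the actual record from the anecdote):

```python
def to_metres(feet, inches, decimals=2):
    """Convert a feet-and-inches length to metres, rounded to a resolution
    the original measurement can support (2 decimals = 1 cm is an assumption
    for illustration; a javelin mark hardly justifies even that)."""
    metres = (feet * 12 + inches) * 0.0254  # 1 inch = 0.0254 m by definition
    return round(metres, decimals)

# Hypothetical throw of 253 ft 9 in: report 77.34 m, not 77.34300000 m
print(to_metres(253, 9))
```

    The exact product carries far more digits than the tape measurement does, so the rounding step is what keeps the reported precision honest.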

    Another real-world example: I once designed and implemented a system for measuring the depth of several measurement points below the sea. When I asked the customer what precision he needed, the answer was "better than 1cm". Problem was, even on a very calm day the waves were about 0.5m peak-to-trough. So: better than 1cm relative to what?
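    Averaging is the usual way out of that problem: if the swell is treated as independent random noise (a big simplification for real waves, which are correlated in time), the scatter of the averaged depth falls roughly as 1/sqrt(n). A sketch with made-up numbers:

```python
import random
import statistics

random.seed(1)  # fixed seed so the run is reproducible

TRUE_DEPTH = 20.00  # hypothetical seabed depth in metres
WAVE_STD = 0.25     # ~0.5 m peak-to-trough swell modelled as Gaussian noise

def measure_depth(n_samples):
    """Average n instantaneous readings; the wave-induced scatter of the
    mean shrinks roughly as WAVE_STD / sqrt(n_samples)."""
    readings = [TRUE_DEPTH + random.gauss(0, WAVE_STD) for _ in range(n_samples)]
    return statistics.mean(readings)

# Repeat the averaged measurement to see how much residual scatter remains:
# 2500 samples should bring 25 cm of wave noise down to about 0.5 cm.
means = [measure_depth(2500) for _ in range(200)]
print(f"scatter of averaged depth: {statistics.stdev(means) * 100:.2f} cm")
```

    Of course this only beats down the random part; a systematic offset (tide, transducer calibration) survives any amount of averaging, which loops back to the accuracy-versus-precision distinction above.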