
Does accuracy increase with repeated measurements and why

  1. Jan 22, 2015 #1
    For example, I weigh a pebble and get a value of 7.1 g. Then I measure again and get 7.6 g, then again and get 6.9 g. Then I repeat the measurement using the same scale 5000 times (it's just a thought experiment). Is the mean of all measurements closer to the true value, and if so, why? Thanks
     
  3. Jan 22, 2015 #2

    phinds

    Gold Member
    2016 Award

    It depends on your measurement device. A mean/average is likely to be more accurate than half of your measurements (pretty much by definition, but of course the problem is that you won't know WHICH ones are off). You could be way off in all cases and just have a mean that is better than half the measurements.

    I think it also depends on the stated precision of the answer relative to the accuracy of the measurements.

    BUT ... take all of that with a grain of salt since I don't really remember much of my measurement theory course that I took about 50 years ago and I'm perfectly willing to just make stuff up :smile:
     
  4. Jan 22, 2015 #3
    Thanks for the answer phinds... what I'm curious about is whether this is true in a general sense. That is, if the weighing scale is known to be accurate to, say, +/- 0.01 g, does the mean of repeated measurements give more accuracy?
     
  5. Jan 22, 2015 #4

    Stephen Tashi

    Science Advisor

    You've described a situation, but the description is not specific enough to be a mathematical question. To be a mathematical question, you'd first have to define what you mean by "more accuracy".

    For example, if we are considering errors to be "random" in some sense, then there are no mathematical guarantees about accuracy as a deterministic quantity. Applying mathematics would only produce statements about the probability of some event - like "the probability that the measured amount is within such-and-such of the actual weight is so-and-so".
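
    To make that concrete (a sketch added alongside this post, not from the thread): assuming N independent, unbiased readings with finite variance σ², Chebyshev's inequality applied to the sample mean gives

    $$P\big(|\bar{X}_N - \mu| < \epsilon\big) \;\ge\; 1 - \frac{\sigma^2}{N\epsilon^2}$$

    i.e. the chance that the average of N readings lies within ε of the true weight μ improves as N grows, but it is a probability statement, not a guarantee.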
     
  6. Jan 22, 2015 #5

    nsaspook

    Science Advisor

  7. Jan 22, 2015 #6

    Choppy

    Science Advisor
    Education Advisor

    I would think about this in the following sense. There are two kinds of errors associated with measurements (I) random errors and (II) biases.

    Random errors can influence your result in either direction and when the measurement is repeated, they will take on different values.

    A bias is an error that, on repeated measurements will always influence the result in the same direction with the same magnitude. This might be the case if, in your example, the scale is incorrectly calibrated and always results in a measurement that is 2 g high of the true value.

    You probably already know that if you measure a value many times and plot your results in a histogram, that histogram will likely have a bell-shape to it (or a normal distribution). This is because for the case of many, small potential random errors (as is commonly encountered in the real world) some will push the result up and others will push it down. In some rare, extreme cases all the random errors will point in the same direction putting you out on the tail of the distribution. More often, most of the errors will cancel out leaving you with a result that's closer to the true value.

    Mathematically you can show that the mean value of this distribution is actually your best estimate for the real value of the parameter you are measuring. In fact, if you were to make multiple sets of N measurements, you would get a set of mean values, and those mean values would themselves be normally distributed with a width characterized by the standard deviation of the individual measurements divided by the square root of N. (This is probably the answer you're looking for: the uncertainty in your best estimate of the true value is inversely proportional to the square root of N, so more measurements improve your answer.)

    But it's important to remember that this distribution will not resolve any bias in your apparatus or measurement technique. In that sense, your best estimate will never be perfect, because it cannot eliminate bias.
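
    A quick numerical sketch of both points (not part of the original post; a Python simulation with assumed values: true weight 7.0 g, random error spread 0.3 g, and a fixed +2 g calibration bias as in the example above):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    true_weight = 7.0   # g -- assumed "true" value for this sketch
    sigma = 0.3         # g -- assumed spread of the random error
    bias = 2.0          # g -- fixed calibration offset, as in the example above

    for n in (10, 100, 1000, 10000):
        readings = true_weight + bias + rng.normal(0.0, sigma, size=n)
        sem = readings.std(ddof=1) / np.sqrt(n)  # standard error of the mean ~ sigma / sqrt(n)
        print(f"n={n:6d}  mean={readings.mean():.3f} g  std error of mean={sem:.4f} g")

    # The standard error shrinks roughly as 1/sqrt(n), but the mean settles near
    # true_weight + bias (about 9.0 g), not 7.0 g: averaging beats down the random
    # error, not the calibration bias.
    ```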
     
  8. Jan 22, 2015 #7
    If the tip of the bell in a bell-shaped histogram is the best estimate of the true value, then doesn't the accuracy (the difference between the measured value and the actual value) of the average of all your measurements improve as the number of data points increases? So that if you had an infinite number of data points, the average would tend toward some value, which (excluding biases) would be the actual value? Thanks for any more help.
     
  9. Jan 22, 2015 #8

    Choppy

    Science Advisor
    Education Advisor

    Yes, as N goes to infinity, then the uncertainty in your best estimate falls to zero (excluding biases).
     
  10. Jan 22, 2015 #9
    OK thanks... so also, what does it mean exactly when something is said to be "90% accurate"? Is it equivalent to saying "9 times out of 10 this measuring device will give the actual value"?
     
  11. Jan 23, 2015 #10

    nsaspook

    Science Advisor

  12. Jan 23, 2015 #11
    Assuming independent trials. This might not always be the case in the real world, because maybe you are causing wear and tear on your measuring device. Of course, that's why there are companies and institutions that deal with calibration, standards, and so on.
     
  13. Jan 25, 2015 #12

    Svein

    Science Advisor

    Well, that would really take a long lecture. Let us just say that as long as you are doing repeated measurements of the same thing, the mean will give you a better estimate of the "true measurement". But (and here is what people who talk about the mean usually forget):
    • The standard deviation gives you an estimate of how precise your measurements are (if you have a large standard deviation, your mean can be far off the "true value").
    • If you do repeated measurements of something that changes over time (like temperature), the calculated mean does not give you a better estimate of the temperature at a given point in time (see the sketch after this list).
    Remember: When using statistics on measurements, the mean is just the result of a mathematical exercise. It does not add truth or precision to anything. What it does is describe a set of data in a "chunked" way - it can help you understand, but it can also lead to rash conclusions.
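
    A small sketch of the second bullet (not part of the original post; a Python example with a made-up linear temperature drift plus measurement noise):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical temperature drifting linearly from 20 °C to 25 °C over 60 readings,
    # each reading carrying a small random error (all values assumed for the sketch).
    t = np.arange(60)
    true_temp = 20.0 + 5.0 * t / 59.0
    readings = true_temp + rng.normal(0.0, 0.2, size=t.size)

    print(f"mean of all 60 readings       : {readings.mean():.2f} °C")
    print(f"true temperature at reading 60: {true_temp[-1]:.2f} °C")
    # The mean (~22.5 °C) summarises the whole interval; it is not a better
    # estimate of the temperature at any particular moment.
    ```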
     
  14. Jan 25, 2015 #13

    phinds

    Gold Member
    2016 Award

    No, you can have a very small standard deviation around your measurements and be WAY off in accuracy if your measuring device is precise but inaccurate. A different device making the same measurements could give a larger standard deviation but be much more accurate, if that device is accurate but less precise.
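
    A numerical sketch of that distinction (not part of the original post; Python, with assumed values for two hypothetical devices):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    true_weight = 7.0  # g -- assumed true value

    # Device A: precise but miscalibrated (+0.5 g bias, tiny scatter).
    device_a = true_weight + 0.5 + rng.normal(0.0, 0.01, size=1000)
    # Device B: accurate but noisy (no bias, large scatter).
    device_b = true_weight + rng.normal(0.0, 0.3, size=1000)

    for name, data in (("A precise/biased ", device_a), ("B noisy/unbiased ", device_b)):
        print(f"{name}: mean={data.mean():.3f} g  std dev={data.std(ddof=1):.3f} g")

    # A shows a tiny standard deviation yet its mean sits ~0.5 g off the true value;
    # B scatters far more but its mean lands very close to 7.0 g.
    ```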
     
  15. Jan 25, 2015 #14
    thanks... If I may use a different example then: an automated blood pressure cuff. The repeated measurements are all automated, so biases are less. If just one automated measurement is 99% accurate to the true blood pressure, then wouldn't the mean of 60 automated measurements over 60 minutes be more than 99% accurate, and if so, how do I express that with math?
    also phinds, I get the distinction between accuracy and precision (if I read your comment correctly), thanks
     
  16. Jan 26, 2015 #15

    Svein

    Science Advisor

    As I said - calculating the mean is just a mathematical exercise. In the real world it may not even make sense.

    Example: Flip a coin 100 times. Assign a value of "1" to heads and "0" to tails. The mean will be very close to 0.5. All measurements will be very precise. But the result does not make any physical sense at all.
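
    A quick illustration of that coin example (not part of the original post; a small Python sketch):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    flips = rng.integers(0, 2, size=100)   # 1 = heads, 0 = tails
    print(f"mean of 100 flips: {flips.mean():.2f}")
    # The mean comes out near 0.5, yet no single flip can ever be 0.5:
    # the average is a mathematical summary, not a physical outcome.
    ```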

    Anecdote: When digital thermometers came out, people suddenly started referring to the temperature with one or two decimals (the resolution of the thermometer). In reality, no thermometer maker would state the accuracy of their product. I measured one such thermometer against a calibrated mercury thermometer - the digital thermometer offered two decimals but was about 1.5° off.
     
    Last edited: Jan 26, 2015