Does accuracy increase with repeated measurements, and why?

SUMMARY

The discussion centers on the relationship between repeated measurements and accuracy, specifically using a weighing scale with a known accuracy of +/- 0.01g. It is established that while the mean of repeated measurements can provide a better estimate of the true value, this improvement is contingent upon the absence of bias in the measurement device. The conversation highlights the distinction between random errors, which can average out, and biases, which consistently skew results. Ultimately, increasing the number of measurements reduces uncertainty, but does not eliminate bias, thus affecting overall accuracy.

PREREQUISITES
  • Understanding of measurement theory, including accuracy and precision
  • Familiarity with statistical concepts such as mean and standard deviation
  • Knowledge of random errors versus biases in measurements
  • Basic grasp of normal distribution and its implications in data analysis
NEXT STEPS
  • Explore the concept of "bias" in measurement and its impact on accuracy
  • Learn about statistical methods for analyzing measurement data, including confidence intervals
  • Investigate calibration techniques for measurement devices to ensure accuracy
  • Study the implications of sample size on the reliability of statistical estimates
USEFUL FOR

Researchers, statisticians, quality control professionals, and anyone involved in experimental design or measurement accuracy will benefit from this discussion.

gonnis
For example, I weigh a pebble and get a value of 7.1g. Then I measure again and get 7.6g, then again and get 6.9g. Then I repeat the measurement using the same scale 5000 times (it's just a thought experiment). Is the mean of all the measurements closer to the true value, and if so, why? Thanks
 
It depends on your measurement device. A mean/average is likely to be more accurate than half of your measurements (pretty much by definition, but of course the problem is that you won't know WHICH ones are off). You could be way off in all cases and just have a mean that is better than half the measurements.

I think it also depends on the stated precision of the answer relative to the accuracy of the measurements.

BUT ... take all of that with a grain of salt since I don't really remember much of my measurement theory course that I took about 50 years ago and I'm perfectly willing to just make stuff up :smile:
 
Thanks for the answer, phinds... What I am curious about is whether this is true in a general sense. That is, if the weighing scale is known to be accurate to, say, +/- 0.01g, does the mean of repeated measurements give more accuracy?
 
gonnis said:
if the weighing scale is known to be accurate to, say, +/- 0.01g, does the mean of repeated measurements give more accuracy?

You've described a situation, but the description is not specific enough to be a mathematical question. To be a mathematical question, you'd first have to define what you mean by "more accuracy".

For example, if we are considering errors to be "random" in some sense, then there are no mathematical guarantees about accuracy as a deterministic quantity. Applying mathematics would only produce statements about the probability of some event - like "the probability that the measured amount is within such-and-such of the actual weight is so-and-so".
 
gonnis said:
Thanks for the answer, phinds... What I am curious about is whether this is true in a general sense. That is, if the weighing scale is known to be accurate to, say, +/- 0.01g, does the mean of repeated measurements give more accuracy?

I would think about this in the following sense. There are two kinds of errors associated with measurements: (I) random errors and (II) biases.

Random errors can influence your result in either direction, and when the measurement is repeated, they will take on different values.

A bias is an error that, on repeated measurements, will always influence the result in the same direction with the same magnitude. This might be the case if, in your example, the scale is incorrectly calibrated and always gives a measurement that is 2 g above the true value.

You probably already know that if you measure a value many times and plot your results in a histogram, that histogram will likely have a bell shape (a normal distribution). This is because, when there are many small potential random errors (as is commonly encountered in the real world), some will push the result up and others will push it down. In some rare, extreme cases all of the random errors will point in the same direction, putting you out on the tail of the distribution. More often, most of the errors will cancel out, leaving you with a result that's closer to the true value.

Mathematically, you can show that the mean value of this distribution is your best estimate for the real value of the parameter you are measuring. In fact, if you were to make multiple sets of N measurements, you would get a set of mean values, and those mean values would themselves be normally distributed with a width characterized by the standard deviation divided by the square root of N. (This is probably the answer you're looking for - the uncertainty in your best estimate of the true value is inversely proportional to the square root of N, so more measurements improve your answer.)

But it's important to remember that this distribution will not resolve any bias in your apparatus or measurement technique. In that sense, your best estimate will never be perfect, because it cannot eliminate bias.
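A minimal simulation sketch of the two points above, with invented numbers (a hypothetical true mass of 7.2 g, an assumed random error of 0.3 g per reading, and an assumed 0.05 g calibration bias), showing that the scatter of the mean shrinks like the standard deviation over the square root of N while the bias never averages away:

```python
import numpy as np

rng = np.random.default_rng(0)

true_mass = 7.2   # hypothetical true mass of the pebble, in grams
sigma = 0.3       # assumed random (zero-mean) error of a single reading, in grams
bias = 0.05       # assumed calibration bias of the scale, in grams

for n in (10, 100, 1000, 10000):
    # For each n, average n readings; repeat the experiment 500 times
    # to see how those means scatter around the (biased) centre.
    means = np.array([(true_mass + bias + rng.normal(0.0, sigma, n)).mean()
                      for _ in range(500)])
    print(f"n = {n:5d}  spread of the mean = {means.std():.4f}  "
          f"(sigma/sqrt(n) = {sigma / np.sqrt(n):.4f})  "
          f"mean offset from true value = {means.mean() - true_mass:+.3f}")
```

The spread column tracks sigma/sqrt(n) as n grows, while the offset column stays near the 0.05 g bias no matter how many readings are averaged.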
 
Choppy said:
Mathematically, you can show that the mean value of this distribution is your best estimate for the real value of the parameter you are measuring. In fact, if you were to make multiple sets of N measurements, you would get a set of mean values, and those mean values would themselves be normally distributed with a width characterized by the standard deviation divided by the square root of N. (This is probably the answer you're looking for - the uncertainty in your best estimate of the true value is inversely proportional to the square root of N, so more measurements improve your answer.)
If the tip of the bell in a bell-shaped histogram is the best estimate of the true value, then doesn't the accuracy of the average of all your measurements (how close it is to the actual value) improve as the number of data points increases? So that if you had an infinite number of data points it would tend toward some value, which (excluding biases) would be the actual value? Thanks for any more help
 
Yes, as N goes to infinity, the uncertainty in your best estimate falls to zero (excluding biases).
 
OK, thanks... So what does it mean exactly when something is said to be "90% accurate"? Is it equivalent to saying "9 times out of 10 this measuring device will give the actual value"?
 
Yes, as N goes to infinity, the uncertainty in your best estimate falls to zero (excluding biases).

Assuming independent trials. This might not always be the case in the real world, because maybe you are causing wear and tear on your measuring device. Of course, that's why there are companies and institutions that deal with calibration, standards, and so on.
 
gonnis said:
For example, I weigh a pebble and get a value of 7.1g. Then I measure again and get 7.6g, then again and get 6.9g. Then I repeat the measurement using the same scale 5000 times (it's just a thought experiment). Is the mean of all the measurements closer to the true value, and if so, why? Thanks
Well, the full answer is really a long lecture. Let us just say that as long as you are doing repeated measurements of the same thing, the mean will give you a better estimate of the "true measurement". But (here is what people who talk about the mean usually forget):
  • The standard deviation gives you an estimate of how precise your measurements are (if you have a large standard deviation, your mean can be far off the "true value").
  • If you do repeated measurements of things that change over time (like temperature), the calculated mean does not give you a better estimate of the temperature at a given point in time - see the sketch after this post.
Remember: When using statistics on measurements, the mean is just the result of a mathematical exercise. It does not add truth or precision to anything. What it does is describe a set of data in a "chunked" way - it can help you understand, but it can also lead to rash conclusions.
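A short sketch of the temperature point, with made-up numbers (an assumed reading noise of 0.2 °C, and a temperature that drifts from 20 °C to 26 °C over an hour of one-per-minute readings):

```python
import numpy as np

rng = np.random.default_rng(1)
noise = 0.2  # assumed random error of a single reading, in degrees C

# Case 1: sixty readings of the same, constant 20 C temperature.
constant = 20.0 + rng.normal(0.0, noise, 60)

# Case 2: sixty readings of a temperature drifting from 20 C to 26 C over the hour.
drift_true = np.linspace(20.0, 26.0, 60)
drifting = drift_true + rng.normal(0.0, noise, 60)

print(f"constant quantity: mean = {constant.mean():.2f}, std = {constant.std(ddof=1):.2f}")
print(f"drifting quantity: mean = {drifting.mean():.2f}, std = {drifting.std(ddof=1):.2f}")
print(f"true temperature at the last reading: {drift_true[-1]:.2f}")
```

For the constant quantity the mean is a good estimate and the standard deviation reflects only the reading noise; for the drifting quantity the mean lands near the middle of the hour and describes the temperature at no particular moment.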
 
Svein said:
  • The standard deviation gives you an estimate of how precise your measurements are (if you have a large standard deviation, your mean can be far off the "true value").
No, you can have a very small standard deviation around your measurements and be WAY off in accuracy if your measuring device is precise but inaccurate. A different device making the same measurements could give a larger standard deviation but be much more accurate if that device is accurate but less precise.
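A quick numerical illustration of this, again with invented numbers: device A is very precise (0.01 g scatter) but carries a 0.5 g calibration offset, while device B has thirty times the scatter but no offset.

```python
import numpy as np

rng = np.random.default_rng(2)
true_value = 7.2  # hypothetical true mass, in grams

# Device A: precise but inaccurate (tiny scatter, large calibration offset).
device_a = true_value + 0.5 + rng.normal(0.0, 0.01, 1000)
# Device B: accurate but less precise (larger scatter, no offset).
device_b = true_value + rng.normal(0.0, 0.30, 1000)

for name, data in (("A (precise, inaccurate)", device_a),
                   ("B (accurate, imprecise)", device_b)):
    print(f"device {name}: std = {data.std(ddof=1):.3f}, "
          f"mean error = {data.mean() - true_value:+.3f}")
```

Device A's mean stays about 0.5 g off no matter how many readings you average; device B's mean error shrinks toward zero as readings accumulate.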
 
Svein said:
Let us just say that as long as you are doing repeated measurements of the same thing, the mean will give you a better estimate of the "true measurement".
Thanks... If I may use a different example then: an automated blood pressure cuff. The repeated measurements are all automated, so biases are less. If just one automated measurement is 99% accurate to the true blood pressure, then wouldn't the mean of 60 automated measurements over 60 minutes be more than 99% accurate, and if so, how do I express that with math?
Also, phinds, I get the distinction between accuracy and precision (if I read your comment correctly). Thanks.
 
phinds said:
No, you can have a very small standard deviation around your measurements and be WAY off in accuracy if your measuring device is precise but inaccurate. A different device making the same measurements could give a larger standard deviation but be much more accurate if that device is accurate but less precise.

As I said - calculating the mean is just a mathematical exercise. In the real world it may not even make sense.

Example: Flip a coin 100 times. Assign a value of "1" to heads and "0" to tails. The mean will be very close to 0.5. All measurements will be very precise. But the result does not make any physical sense at all.
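In code, just to make the point concrete:

```python
import random

random.seed(42)
flips = [random.randint(0, 1) for _ in range(100)]  # 1 = heads, 0 = tails
print(sum(flips) / len(flips))  # close to 0.5, a value no single flip can ever show
```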

Anecdote: When digital thermometers came out, people suddenly started referring to the temperature with one or two decimals (the resolution of the thermometer). In reality, no thermometer maker would state the accuracy of their product. I measured one such thermometer against a calibrated mercury thermometer - the digital thermometer offered two decimals but was about 1.5° off.
 