Does accuracy increase with repeated measurements and why

In summary: I think that's unlikely, unless the device is only intended for a very rough estimate. If something is advertised as "90% accurate", it would be reasonable to assume that means +/- 10% of the true value. In other words, if the true value is 5 g, then the measurement would fall between 4.5 g and 5.5 g 90% of the time. This is not a very good device.
  • #1
gonnis
For example, I weigh a pebble and get a value of 7.1 g. Then I measure again and get 7.6 g, then again and get 6.9 g. Then I repeat the measurement using the same scale 5000 times (it's just a thought experiment). Is the mean of all the measurements closer to the true value, and if so, why? Thanks
 
  • #2
It depends on your measurement device. A mean/average is likely to be more accurate than half of your measurements (pretty much by definition, but of course the problem is that you won't know WHICH ones are off). You could be way off in all cases and just have a mean that is better than half the measurements.

I think it also depends on the stated precision of the answer relative to the accuracy of the measurements.

BUT ... take all of that with a grain of salt since I don't really remember much of my measurement theory course that I took about 50 years ago and I'm perfectly willing to just make stuff up :smile:
 
  • #3
Thanks for the answer phinds... what I am curious about is whether this is true in a general sense. That is, if the weighing scale is known to be accurate to say, +/- .01g, then does the mean of repeated measurements = more accuracy?
 
  • #4
gonnis said:
if the weighing scale is known to be accurate to say, +/- .01g, then does the mean of repeated measurements = more accuracy?

You've described a situation, but the description is not specific enough to be a mathematical question. To be a mathematical question, you'd first have to define what you mean by "more accuracy".

For example, if we are considering errors to be "random" in some sense, then there are no mathematical guarantees about accuracy as a deterministic quantity. Applying mathematics would only produce statements about the probability of some event - like "the probability that the measured amount is within such-and-such of the actual weight is so-and-so".
 
  • #6
gonnis said:
Thanks for the answer phinds... what I am curious about is whether this is true in a general sense. That is, if the weighing scale is known to be accurate to say, +/- .01g, then does the mean of repeated measurements = more accuracy?

I would think about this in the following sense. There are two kinds of errors associated with measurements: (I) random errors and (II) biases.

Random errors can influence your result in either direction and when the measurement is repeated, they will take on different values.

A bias is an error that, on repeated measurements, will always influence the result in the same direction and with the same magnitude. This might be the case if, in your example, the scale is incorrectly calibrated and always gives a measurement that is 2 g above the true value.

You probably already know that if you measure a value many times and plot your results in a histogram, that histogram will likely have a bell shape (a normal distribution). This is because, when there are many small potential random errors (as is commonly encountered in the real world), some will push the result up and others will push it down. In some rare, extreme cases all the random errors will point in the same direction, putting you out on the tail of the distribution. More often, most of the errors will cancel out, leaving you with a result that's closer to the true value.

Mathematically you can show that the mean value of this distribution is actually your best estimate for the real value of the parameter you are measuring. In fact, if you were to make multiple sets of N measurements, that would give you a set of mean values; those mean values will themselves be normally distributed, with a width characterized by the standard deviation of a single measurement divided by the square root of N. (This is probably the answer you're looking for: the uncertainty in your best estimate of the true value is inversely proportional to the square root of N, so more measurements improve your answer.)
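Expressed as a formula (this is just the standard result for the mean of N independent measurements, each with standard deviation $\sigma$):

$$\sigma_{\bar{x}} = \frac{\sigma}{\sqrt{N}}$$

so, for example, averaging 100 measurements cuts the random uncertainty of a single reading by a factor of 10, while any bias is left untouched.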

But it's important to remember that this distribution will not resolve any bias in your apparatus or measurement technique. In that sense, your best estimate will never be perfect, because it cannot eliminate bias.
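For anyone who wants to see both points numerically, here is a small Python sketch (the true weight, noise level, and bias below are made-up numbers chosen only for illustration, and numpy is assumed to be available): the spread of the means shrinks like 1/√N, while the bias survives the averaging unchanged.

import numpy as np

rng = np.random.default_rng(0)

true_weight = 7.2   # hypothetical "true" mass of the pebble, in grams (made up)
noise_sd = 0.3      # assumed random error of a single reading, in grams (made up)
bias = 0.05         # assumed calibration bias of the scale, in grams (made up)

for n in (10, 100, 1000, 5000):
    # 2000 independent experiments, each averaging n readings of the same pebble
    readings = true_weight + bias + noise_sd * rng.standard_normal((2000, n))
    means = readings.mean(axis=1)
    # The spread of the means follows noise_sd / sqrt(n), but the average
    # offset from the true weight stays at roughly the bias (0.05 g).
    print(f"n = {n:5d}   spread of the means = {means.std():.4f} "
          f"(theory {noise_sd / np.sqrt(n):.4f})   "
          f"average offset from true value = {means.mean() - true_weight:+.3f}")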
 
  • #7
Choppy said:
Mathematically you can show that the mean value of this distribution is actually your best estimate for the real value of the parameter you are measuring. In fact, if you were to make multiple sets of N measurements, that would give you a set of mean values; those mean values will themselves be normally distributed, with a width characterized by the standard deviation of a single measurement divided by the square root of N. (This is probably the answer you're looking for: the uncertainty in your best estimate of the true value is inversely proportional to the square root of N, so more measurements improve your answer.)
If the tip of the bell in a bell-shaped histogram is the best estimate of the true value, then doesn't the accuracy of the average of all your measurements (the difference between the measured and actual values) improve as the number of data points increases? So that if you had an infinite number of data points it would tend toward some value, which (excluding biases) would be the actual value? Thanks for any more help
 
  • #8
Yes, as N goes to infinity, then the uncertainty in your best estimate falls to zero (excluding biases).
 
  • #9
OK thanks... so also, what does it mean exactly when something is said to be "90% accurate"? Is it equivalent to saying "9 times out of 10 this measuring device will give the actual value"?
 
  • #11
Yes, as N goes to infinity, then the uncertainty in your best estimate falls to zero (excluding biases).

Assuming independent trials. This might not always be the case in the real world, because maybe you are causing wear and tear on your measuring device. Of course, that's why there are companies and institutions that deal with calibration, standards, and so on.
 
  • #12
gonnis said:
For example, I weigh a pebble and get a value of 7.1g. Then I measure again and get 7.6g, then again and get 6.9g.. Then I repeat the measurement using the same scale 5000 times. (its just a thought experiment). Is the mean of all measurements closer to the true value and if so, why? thanks
Well, a full answer would really be a long lecture. Let us just say that as long as you are doing repeated measurements of the same thing, the mean will give you a better estimate of the "true measurement". But (here is what people who talk about the mean usually forget):
  • The standard deviation gives you an estimate of how precise your measurements are (if you have a large standard deviation, your mean can be far off the "true value").
  • If you do repeated measurements of things that change over time (like temperature), the calculated mean does not give you a better estimate of the temperature at a given point in time (see the sketch after this post).
Remember: When using statistics on measurements, the mean is just the result of a mathematical exercise. It does not add truth or precision to anything. What it does is describe a set of data in a "chunked" way - it can help you understand, but it can also lead to rash conclusions.
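To illustrate the second bullet with made-up numbers (a drifting temperature rather than a fixed pebble), here is a small Python sketch: the mean describes the whole measurement period, not the temperature at any particular moment.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical room temperature drifting from 18 °C to 26 °C over 60 readings,
# each reading carrying a little random measurement noise.
readings = np.linspace(18.0, 26.0, 60) + 0.2 * rng.standard_normal(60)

print(f"mean of all readings   = {readings.mean():.1f} °C")   # about 22 °C
print(f"last (current) reading = {readings[-1]:.1f} °C")      # about 26 °C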
 
  • #13
Svein said:
  • The standard deviation gives you an estimate of how precise your measurements are (if you have a large standard deviation, your mean can be far off the "true value").
No, you can have a very small standard deviation around your measurements and be WAY off in accuracy if your measuring device is precise but inaccurate. A different device making the same measurements could give a larger standard deviation but be much more accurate, if that device is accurate but less precise.
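A toy numerical version of that point (all numbers invented for illustration): device A below has a tiny standard deviation but its mean sits 0.5 g off, while device B scatters much more yet its mean lands essentially on the true value.

import numpy as np

rng = np.random.default_rng(2)
true_value = 7.2   # hypothetical true mass, in grams (made up)

# Device A: very precise (tiny scatter) but badly calibrated (0.5 g bias).
a = true_value + 0.5 + 0.01 * rng.standard_normal(1000)
# Device B: much less precise (0.3 g scatter) but unbiased.
b = true_value + 0.3 * rng.standard_normal(1000)

for name, x in (("A (precise, inaccurate)", a), ("B (imprecise, accurate)", b)):
    print(f"{name}: std dev = {x.std():.3f} g, "
          f"mean error = {x.mean() - true_value:+.3f} g")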
 
  • #14
Svein said:
Let us just say that as long as you are doing repeated measurements of the same thing, the mean will give you a better estimate of the "true measurement".
Thanks... If I may use a different example then: an automated blood pressure cuff. The repeated measurements are all automated, so biases are less. If just one automated measurement is 99% accurate to the true blood pressure, then wouldn't the mean of 60 automated measurements over 60 minutes be more than 99% accurate, and if so, how do I express that with math?
Also, phinds, I get the distinction between accuracy and precision (if I read your comment correctly). Thanks
 
  • #15
phinds said:
No, you can have a very small standard deviation around your measurements and be WAY off in accuracy if you measuring device is precise but inaccurate. A different device making the same measurements could give a larger standard deviation but be much more accurate if that device is accurate but less precise.

As I said - calculating the mean is just a mathematical exercise. In the real world it may not even make sense.

Example: Flip a coin 100 times. Assign a value of "1" to heads and "0" to tails. The mean will be very close to 0.5. All measurements will be very precise. But the result does not make any physical sense at all.

Anecdote: When digital thermometers came out, people suddenly started referring to the temperature with one or two decimals (the resolution of the thermometer). In reality, no thermometer maker would state the accuracy of their product. I measured one such thermometer against a calibrated mercury thermometer - the digital thermometer offered two decimals but was about 1.5° off.
 

1. Does taking repeated measurements increase the accuracy of my results?

Yes, repeated measurements can increase the accuracy of your results. Each measurement contains some degree of random error, and averaging multiple measurements tends to cancel those random errors, giving a result closer to the true value. Note that averaging does not remove systematic bias (for example, a miscalibrated instrument).

2. How many repeated measurements should I take to increase the accuracy?

The number of repeated measurements needed to increase accuracy depends on the precision of your equipment and the variability of your data. Generally, the more repeated measurements you take, the more accurate your averaged result will be; its uncertainty shrinks roughly as 1/√N. It is recommended to take at least three measurements and calculate the average.
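As a minimal worked example (reusing the three pebble readings quoted in post #1), the mean and its estimated uncertainty can be computed like this in Python:

import statistics

readings = [7.1, 7.6, 6.9]           # the three pebble readings from post #1, in grams
mean = statistics.mean(readings)     # best estimate of the weight
# Uncertainty of the mean: sample standard deviation divided by sqrt(N).
sem = statistics.stdev(readings) / len(readings) ** 0.5
print(f"best estimate: {mean:.2f} g +/- {sem:.2f} g")   # 7.20 g +/- 0.21 g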

3. Is there a limit to how many repeated measurements I can take?

There is no set limit to the number of repeated measurements you can take. However, it is important to consider the time and resources needed to take and analyze a large number of measurements. It is also important to ensure that the measurements are independent of each other to avoid bias in the results.

4. Can repeated measurements increase precision as well as accuracy?

Yes, repeated measurements can improve both precision and accuracy. Precision refers to the consistency of results, and averaging repeated measurements reduces the variability of the estimate, increasing its precision. This, in turn, can lead to a more accurate result, provided there is no systematic bias.

5. Are there any disadvantages to taking repeated measurements?

One potential disadvantage of taking repeated measurements is the increased time and resources needed to collect and analyze the data. Additionally, if the measurements are not independent, they may introduce bias into the results. It is important to carefully plan and execute the repeated measurements to avoid these potential issues.
