When Do I Use Reading Error in Measurement Uncertainty?

Summary
In measuring uncertainties, the reading error of an instrument, such as a stopwatch, should be considered alongside the variability from multiple trials. When calculating uncertainty, the maximum and minimum readings can provide a range, but the reading error must also be factored in, especially if it is significant compared to the variability observed. Systematic errors, which may arise from consistent biases in measurement, should be identified separately as they can affect the overall accuracy. For periodic measurements, averaging over multiple cycles can help mitigate the impact of both reading and systematic errors. Ultimately, a comprehensive approach that includes both types of errors will yield a more accurate estimation of measurement uncertainty.
Darren Byrne

Homework Statement



Hi, I have a question about uncertainties. If I am carrying out an investigation I need to record my uncertainties in each measurement. I am aware that instruments have reading errors. So, if I use a stopwatch to measure the time period of a pendulum, the reading error is ±0.01 s, the smallest reading. But when measuring the period of a pendulum I would do several trials, and to find the uncertainty I would take my max reading minus my min reading and divide by two. So my question is: when do I use the reading error? Do I add it to the error calculated from (max − min)/2, or is it already included? We always do more than one trial measurement; if I use (max − min)/2, then when do I use the reading error?

Homework Equations


(max reading − min reading) / 2

The Attempt at a Solution



I think my thinking through the problem written above is an attempt at a solution. Also, it's more a general question on uncertainties than a specific problem.

Cheers

 
If you use the times you read off to estimate the uncertainty, the uncertainty from reading the time is included already - assuming the former is significantly larger than the latter.
Darren Byrne said:
But if I am measuring the time period of a pendulum I would do several trials and to find the uncertainty then I will take my max reading and my min reading and divide by two.
The average over all measurements is much better.
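As a concrete sketch of that comparison (Python, with invented stopwatch readings), the half-range gives a coarse spread, while the mean with its standard error is usually the better-founded estimate:

```python
import statistics

def half_range(readings):
    """Coarse uncertainty estimate: (max - min) / 2."""
    return (max(readings) - min(readings)) / 2

def mean_and_std_error(readings):
    """Mean with the standard error of the mean: s / sqrt(n)."""
    n = len(readings)
    s = statistics.stdev(readings)  # sample standard deviation
    return statistics.mean(readings), s / n ** 0.5

# Five hypothetical stopwatch readings of one period, in seconds
times = [2.08, 1.95, 2.02, 1.98, 2.05]
print(half_range(times))          # half-range spread estimate
print(mean_and_std_error(times))  # central value with a tighter uncertainty
```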
 
What is important is to determine the measurement error.
If the "read error" on your stopwatch is 0.01 s, that's worth noting - but the actual error will also include your reaction time.

In the case of a periodic motion where it is known that each period is the same as the one before, measuring several periods at once gives you a big advantage. Your measurement error is probably going to stay the same, because the stopwatch is inherently stable over long periods of time.
For example, let's say that you measure one cycle as 2.08 seconds. Then you measure 100 cycles at 199.90 seconds. You would have to conclude that your measurement error was at least 0.08 seconds, and that the actual value you are measuring is pretty close to 2.000 seconds.
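The arithmetic in that example can be sketched like this (Python, same invented numbers): the start/stop timing error is roughly fixed, so spreading it over many cycles dilutes its effect on the per-period value.

```python
def period_from_n_cycles(total_time, n_cycles, timing_error):
    """Period and its uncertainty when one stopwatch reading spans n cycles.

    The roughly fixed start/stop timing error is divided by the cycle
    count, which is why timing many cycles at once is such a big win.
    """
    return total_time / n_cycles, timing_error / n_cycles

# One cycle read as 2.08 s, but 100 cycles read as 199.90 s:
period, err = period_from_n_cycles(199.90, 100, 0.08)
print(period, err)  # the 0.08 s error is diluted to 0.0008 s per period
```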

It is possible for the read error to be critical. For example, suppose you made 100 readings and 2.00 was reported every time. You would then conclude that the period was 2.00 ± 0.01 (the read error), even though the standard deviation is smaller (0 < 0.01).
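That "read error as a floor" idea can be sketched in a few lines (Python; the helper name is mine, not a standard one):

```python
import statistics

def uncertainty_with_floor(readings, read_error):
    """Statistical spread of the readings, but never below the
    instrument's granularity (the read error)."""
    spread = statistics.stdev(readings) if len(readings) > 1 else 0.0
    return max(spread, read_error)

# 100 readings that all display 2.00 s: the spread is zero, so the
# 0.01 s read error takes over as the quoted uncertainty.
print(uncertainty_with_floor([2.00] * 100, 0.01))  # 0.01
```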
 
mfb said:
The average over all measurements is much better.
Did you mean standard deviation? The method described was to find the error range, not the central value.
mfb said:
If you use the times you read off to estimate the uncertainty, the uncertainty from reading the time is included already
... except for systematic errors.
As @.Scott notes, systematic errors can arise in different ways: an inherent delay in reading the time, or a rounding error that tends to be always in the same direction. Neither max-min nor standard deviation will pick those up.

Suppose we assume there are two sources of error, a rounding error from the known granularity of the measurement and an unknown but unbiased error from some other source.
If the first dominates, as @.Scott notes, you might get very consistent readings, so you need to base your error estimate on the known granularity.
If the second dominates, the rounding error can go either way so also becomes unbiased. In this case you can take the observed variation as embodying both sources.
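One common convention for encompassing both sources - not the only defensible one - is to treat the rounding error as a uniform distribution over the granularity and add it in quadrature to the observed spread. A minimal sketch (Python):

```python
def combined_uncertainty(statistical, granularity):
    """Quadrature sum of an observed statistical spread and a
    granularity (rounding) term.

    A uniform rounding error of half-width a has standard deviation
    a / sqrt(3); combining in quadrature assumes the two sources are
    independent. This is one common convention, not a universal rule.
    """
    rounding_sd = granularity / 3 ** 0.5
    return (statistical ** 2 + rounding_sd ** 2) ** 0.5

# Observed spread 0.05 s, stopwatch granularity 0.01 s:
print(combined_uncertainty(0.05, 0.01))  # barely above 0.05 s
```

When the observed spread dominates, the result is essentially unchanged; when the readings are perfectly consistent, the granularity term survives as the floor.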
 
I was concerned about the mean first. Well, mean and standard deviation for central value and uncertainty.
haruspex said:
... except for systematic errors.
The 0.01s reading uncertainty discussed is not a systematic error (unless we get the same reading nearly every time).

You can reduce the systematic uncertainty by taking an integer multiple of periods, that way you start and stop the watch at the same event (zero crossing, ideally). If I remember correctly I got something like 0.03s-0.05s repeatability in that lab experiment, enough to make the timing uncertainty small compared to systematic uncertainties from the length measurement.
 
mfb said:
The 0.01s reading uncertainty discussed is not a systematic error (unless we get the same reading nearly every time).
The point I was making was that it could be a systematic error if other sources of error are smaller. The original question was whether to estimate the error from the variance in the readings or from the known granularity. A general answer is that you need to encompass both, but a general formula is not obvious to me.
mfb said:
You can reduce the systematic uncertainty by taking an integer multiple of periods,
Good idea. I wonder if the ideal, for a given total number of periods, is to split the measurement into two cycle counts in the golden ratio.
 
Golden Ratio between what?

What I did back then: I measured a single period 10 times, my lab partner measured a single period 10 times. We used this to estimate our timing uncertainty.
Then we measured 100 periods four times (or something like that). The pendulum had a length of about a meter, which gives a period of about 2 s (roughly 0.5 Hz). A ~0.04 s timing uncertainty is then about 0.02% after 100 periods; averaged over four measurements it goes down to about 0.01%.
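That scaling can be sketched in one formula (Python; the specific numbers are placeholders taken from the discussion above - plug in your own):

```python
def relative_period_uncertainty(timing_error, period, n_cycles, n_repeats=1):
    """Relative uncertainty on the period when timing n_cycles at once,
    averaged over n_repeats independent stopwatch measurements.

    Both the cycle count and sqrt(n_repeats) divide the timing error."""
    return timing_error / (n_cycles * period * n_repeats ** 0.5)

# A ~0.04 s timing error over 100 cycles of a ~2 s period, repeated
# four times (all values are illustrative):
print(relative_period_uncertainty(0.04, 2.0, 100, 4))  # 0.0001, i.e. 0.01%
```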

The estimate for the moment of inertia had a larger uncertainty, mainly from the length measurement.

Something else to consider: The period does depend on the amplitude - take it into account if you want an accurate measurement of g.
 
mfb said:
Golden Ratio between what?
Between the two periods. E.g. if you allocate time to measure only eight oscillations, time three then five. This could be completely wrong, just a hunch based on the fact that this ratio is the toughest irrational to approximate with rationals.
 
Unless you have a reason to expect that you have things happening every n oscillations, I don't see how that would help.
 
