Homework Statement

Hi, I have a question about uncertainties. When carrying out an investigation I need to record the uncertainty in each measurement. I know that instruments have reading errors: if I use a stopwatch to measure the period of a pendulum, the reading error is ±0.01 s, the smallest division. But when measuring the period I do several trials, and to find the uncertainty I take (max reading - min reading)/2. So my question is: when do I use the reading error? Do I add it to the error calculated from (max - min)/2, or is it already included? We always do more than one trial, so if I use (max - min)/2, when does the reading error come in?

The Attempt at a Solution

My reasoning written out above is my attempt at a solution. Also, this is more a general question about uncertainties than a specific problem.

Cheers


mfb
Mentor
If you use the times you read off to estimate the uncertainty, the uncertainty from reading the time is included already - assuming the former is significantly larger than the latter.
But if I am measuring the time period of a pendulum I would do several trials and to find the uncertainty then I will take my max reading and my min reading and divide by two.
The average over all measurements is much better.
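mfb's point can be illustrated with a short sketch (the readings below are made up for illustration): the half-range (max - min)/2 gives a quick error estimate, while the mean together with its standard error is the better summary of repeated trials.

```python
import statistics

# Hypothetical stopwatch readings (in s) of one pendulum period
readings = [2.05, 1.98, 2.11, 2.03, 1.95, 2.07, 2.01, 2.09]

half_range = (max(readings) - min(readings)) / 2   # the (max - min)/2 estimate
mean = statistics.mean(readings)                   # best central value
stdev = statistics.stdev(readings)                 # sample standard deviation
sem = stdev / len(readings) ** 0.5                 # standard error of the mean

print(f"period = {mean:.3f} ± {sem:.3f} s (half-range estimate: {half_range:.3f} s)")
```

Note that the standard error of the mean shrinks as more trials are added, while the half-range can only grow or stay the same.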

.Scott
Homework Helper
What is important is to determine the measurement error.
If the "read error" on your stop watch is 0.01 s, that's worth noting - but the actual error will also include your reaction time.

In the case of periodic motion, where it is known that each period is the same as the one before, measuring several periods at once gives you a big advantage. Your measurement error will probably stay the same, because the stop watch is inherently stable over long periods of time.
For example, let's say that you measure one cycle as 2.08 seconds, then measure 100 cycles as 199.90 seconds. You would have to conclude that your measurement error was at least 0.08 seconds and that the actual value you are measuring is pretty close to 2.000 seconds.
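The arithmetic in .Scott's example can be checked in a few lines: timing 100 cycles in one go divides the start/stop error across all 100 periods.

```python
# .Scott's numbers: a single-cycle reading vs 100 cycles timed in one go
one_cycle = 2.08            # s, single-period measurement
hundred_cycles = 199.90     # s, 100 periods in a single timing
read_error = 0.01           # s, stopwatch granularity

period_estimate = hundred_cycles / 100       # best estimate of the period
per_period_error = read_error / 100          # read error shared across 100 periods
discrepancy = one_cycle - period_estimate    # mostly reaction time, ~0.08 s

print(f"period ≈ {period_estimate:.3f} s, single-cycle discrepancy ≈ {discrepancy:.3f} s")
```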

It is possible for the read error to be critical. For example, let's say that you made 100 readings and 2.00 was reported every time. You would then conclude that the period was 2.00 ± 0.01 (the read error), even though the standard deviation is smaller (0 < 0.01).

haruspex
Homework Helper
Gold Member
The average over all measurements is much better.
Did you mean standard deviation? The method described was to find the error range, not the central value.
If you use the times you read off to estimate the uncertainty, the uncertainty from reading the time is included already
... except for systematic errors.
As @.Scott notes, systematic errors can arise in different ways: an inherent delay in reading the time, or a rounding error that tends to be always in the same direction. Neither max-min nor standard deviation will pick those up.

Suppose we assume there are two sources of error, a rounding error from the known granularity of the measurement and an unknown but unbiased error from some other source.
If the first dominates, as @.Scott notes, you might get very consistent readings, so you need to base your error estimate on the known granularity.
If the second dominates, the rounding error can go either way so also becomes unbiased. In this case you can take the observed variation as embodying both sources.
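One common convention (an assumption here, not something from the thread) that covers both of haruspex's cases is to treat the rounding error of a display with step d as uniform over ±d/2, which has standard deviation d/√12, and add it in quadrature to the observed scatter:

```python
import math
import statistics

def period_uncertainty(readings, granularity):
    """Combine observed scatter with the instrument granularity.

    Convention (assumed): rounding to a display step d contributes a
    uniform error over ±d/2, i.e. a standard deviation of d/sqrt(12),
    added in quadrature so neither error source is ignored."""
    scatter = statistics.stdev(readings) if len(readings) > 1 else 0.0
    rounding = granularity / math.sqrt(12)
    return math.sqrt(scatter**2 + rounding**2)

# All readings identical: the granularity term keeps the estimate nonzero.
print(period_uncertainty([2.00] * 10, 0.01))
```

When the scatter dominates, the rounding term is negligible; when the readings are all identical, the granularity term takes over, matching the two limiting cases discussed above.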

mfb
Mentor
I was referring to the mean first. Well: mean and standard deviation, for the central value and the uncertainty respectively.
... except for systematic errors.
The 0.01s reading uncertainty discussed is not a systematic error (unless we get the same reading nearly every time).

You can reduce the systematic uncertainty by taking an integer multiple of periods, that way you start and stop the watch at the same event (zero crossing, ideally). If I remember correctly I got something like 0.03s-0.05s repeatability in that lab experiment, enough to make the timing uncertainty small compared to systematic uncertainties from the length measurement.

haruspex
Homework Helper
Gold Member
The 0.01s reading uncertainty discussed is not a systematic error (unless we get the same reading nearly every time).
The point I was making was that it could be a systematic error if other sources of error are smaller. The original question was whether to estimate the error from the variance in the readings or from the known granularity. A general answer is that you need to encompass both, but a general formula is not obvious to me.
You can reduce the systematic uncertainty by taking an integer multiple of periods,
Good idea. I wonder if the ideal, for a given total number of periods, is to use two periods with the Golden Ratio.

mfb
Mentor
Golden Ratio between what?

What I did back then: I measured a single period 10 times, my lab partner measured a single period 10 times. We used this to estimate our timing uncertainty.
Then we measured 100 periods four times (or something like that). The pendulum had a length of about a meter, which gives a period of about 2 s. A ~0.04 s timing uncertainty is then about 0.02% after 100 periods; averaged over four measurements it goes down to 0.01%.

The estimate for the moment of inertia had a larger uncertainty, mainly from the length measurement.

Something else to consider: The period does depend on the amplitude - take it into account if you want an accurate measurement of g.
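The amplitude dependence mfb mentions is small but measurable; to leading order the period of a simple pendulum released from amplitude θ₀ (in radians) is T ≈ T₀(1 + θ₀²/16). A quick sketch:

```python
import math

def period_with_amplitude(t0, theta0_deg):
    """Leading-order amplitude correction to the pendulum period:
    T ≈ T0 * (1 + theta0**2 / 16), theta0 in radians."""
    theta0 = math.radians(theta0_deg)
    return t0 * (1 + theta0**2 / 16)

# A 10-degree swing already lengthens a 2 s period by about 0.2%,
# comparable to the timing uncertainties discussed above.
print(period_with_amplitude(2.0, 10.0))
```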
