
Counting uncertainty in pendulum experiment?

  1. Aug 8, 2014 #1
    Hi, I am confused about when the rule for counting uncertainty applies. I know that for radioactivity experiments one expresses the uncertainty (error) in the decay count as the square root of the count. So if you counted n decays you would report the result as

    [itex]n \pm \sqrt{n}[/itex]
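    For concreteness, this √n behavior can be checked with a quick simulation. The Python sketch below uses purely illustrative numbers (a source averaging 100 decays per counting interval) and generates Poisson counts by summing exponential waiting times:

    ```python
    import math
    import random
    import statistics

    random.seed(1)

    def poisson_count(rate, duration=1.0):
        """Count events of a Poisson process by summing exponential
        waiting times until the duration is used up."""
        t, n = 0.0, 0
        while True:
            t += random.expovariate(rate)
            if t > duration:
                return n
            n += 1

    # Illustrative numbers: a source averaging 100 decays per interval.
    counts = [poisson_count(100) for _ in range(20000)]
    mean = statistics.mean(counts)
    sd = statistics.pstdev(counts)
    print(f"mean count = {mean:.1f}")
    print(f"spread     = {sd:.2f}  (sqrt(mean) = {math.sqrt(mean):.2f})")
    ```

    The spread of the simulated counts comes out very close to the square root of the mean count, which is the rule in question.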

    I was wondering if the same technique applies to, for example, a pendulum whose frequency is calculated based on the number of oscillations in a given time. So perhaps you just watch while holding a stopwatch, or maybe you use a photogate and allow the computer to record breaks in the voltage.

    A. The reason I think it doesn't is that the pendulum oscillations are not random like radioactive decays. If that is the case, what is the procedure for correctly reporting the uncertainty in the calculated frequency?

    B. The reason I think it might still apply is that using the square root of the count gives an average rate. If one repeated the decay experiment many times, you would certainly NOT get the same specific count each time, yet the discrepancy between the average rates would likely be insignificant. Similarly with the pendulum, in a crude setup (like a high school lab), wouldn't you expect variation in the specific counts yet a well-defined average rate?

    C. This is a bit of a tangent, but how would the error reporting scheme (whatever it is) change if I used the method of counting with my eyes and a stopwatch vs a photogate attached to a computer?

    Thanks for any insight.
    Last edited: Aug 8, 2014
  3. Aug 8, 2014 #2



    Staff: Mentor

    First imagine an experiment that counts the number of decays from a sample during a fixed time interval, say one minute. Repeat it many, many times. (Of course the sample's half-life has to be long enough that we don't have to worry about its activity decreasing during the course of all these trials.) If we find that the average number of counts over all the trials is 100, we should find that about 2/3 of the trials lie in the range from 100 - √100 to 100 + √100, i.e. 90 to 110. That's what the "√n rule" means. And about 1/3 of the trials will be outside this range, i.e. even further from the average.
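    A quick numerical check of that "about 2/3" claim (Python sketch; the 100 counts per interval matches the figure used in this post, and the number of trials is arbitrary):

    ```python
    import random

    random.seed(2)

    def poisson_count(rate, duration=1.0):
        # Sum exponential waiting times until one interval has elapsed;
        # the number of events counted is Poisson-distributed.
        t, n = 0.0, 0
        while True:
            t += random.expovariate(rate)
            if t > duration:
                return n
            n += 1

    # 20000 simulated one-minute runs averaging 100 counts each.
    trials = [poisson_count(100) for _ in range(20000)]
    inside = sum(90 <= n <= 110 for n in trials) / len(trials)
    print(f"fraction of trials between 90 and 110: {inside:.3f}")
    ```

    The fraction lands close to 0.7, consistent with roughly 2/3 of trials within one standard deviation of the mean.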

    Now imagine doing the same thing with your pendulum apparatus. Have it count the number of cycles during a long enough time interval that you average 100 cycles per trial. Repeat it many many times. I would be astonished if the variation between trials was large enough that you have to use a range of 90 to 110 to encompass 2/3 of the trials.

    More precisely, the standard deviation of the number of counts in the radioactive-decay experiment can be shown to be √n, starting from fundamental assumptions that apply to all radioactive-decay processes.

    The standard deviation of the number of pendulum swings depends mainly on the details of your particular apparatus and experimental procedure. For example, how do you start and stop the timer, with relation to the pendulum swing? This is something that you can and should measure experimentally.
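    To illustrate how different the pendulum case is, here is a sketch assuming (purely for illustration, not measured values) a 2 s period, a nominal 200 s timing window, and a human start/stop jitter of 0.1 s:

    ```python
    import random
    import statistics

    random.seed(3)

    PERIOD = 2.0    # pendulum period in seconds (assumed)
    WINDOW = 200.0  # nominal timing window -> 100 cycles on average
    JITTER = 0.1    # assumed start/stop timing error, in seconds

    def counted_cycles():
        # The pendulum itself is perfectly regular here; only the moments
        # of starting and stopping the timer are uncertain.
        effective = WINDOW + random.gauss(0, JITTER) - random.gauss(0, JITTER)
        return int(effective / PERIOD)

    counts = [counted_cycles() for _ in range(20000)]
    mean_c = statistics.mean(counts)
    sd_c = statistics.pstdev(counts)
    print(f"mean cycles = {mean_c:.2f}")
    print(f"spread      = {sd_c:.3f}  (compare sqrt(100) = 10)")
    ```

    The spread here comes almost entirely from where the window boundaries fall relative to a cycle, and it is far below the √n value of 10 — which is the point: the pendulum's spread reflects the apparatus, not Poisson statistics.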
  4. Aug 9, 2014 #3
    Thanks. By "about 2/3" I assume you mean 68.27%, related to the statistical analysis of a normal curve. I indeed hadn't realized that was where the counting rule came from. (edit: sorry, I'm just now reading that the derivation is from the Poisson distribution, not a normal curve)

    When measuring a length with a typical ruler, I was taught to report to the limit of the instrument's graduations plus one more estimated digit. So, on a typical ruler that has only millimeter graduations, a measurement would be something like 2.58 cm. You could then estimate your uncertainty as some fraction of a millimeter. For example, I could probably distinguish quarters of a millimeter but no more, so my measurement would be 2.58 [itex]\pm[/itex] 0.03 cm.

    That type of reasoning was based on a single measurement and involved no statistics. Is there no analogue for counting?
    Last edited: Aug 9, 2014
  5. Aug 9, 2014 #4


    Science Advisor
    Gold Member
    2017 Award

    I have also been confused about the statistics used in the rate of 'events'.
    As I was told, the Poisson distribution describes relatively rare, independent events and their probability. Such measurements do not have a well-defined mean until there are a lot of them. Isn't it true that the Poisson distribution approaches the normal distribution for large numbers? For timing a pendulum, you very soon (after only a few counts) get a good idea of the mean and SD, and it just gets better as you do more, but it doesn't tend to change much. If you timed the pendulum on a bus on a bumpy road, I guess your results would be more Poisson-like :smile: until you did a large number of trials.
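    That intuition can be checked numerically: the skewness of a Poisson distribution is 1/√λ, so the asymmetry visibly fades as the mean grows. A sketch with two illustrative rates (chosen here for demonstration only):

    ```python
    import random
    import statistics

    random.seed(4)

    def poisson_count(rate, duration=1.0):
        # Sum exponential waiting times until the interval is used up.
        t, n = 0.0, 0
        while True:
            t += random.expovariate(rate)
            if t > duration:
                return n
            n += 1

    skews = {}
    for rate in (2, 100):
        samples = [poisson_count(rate) for _ in range(20000)]
        m = statistics.mean(samples)
        s = statistics.pstdev(samples)
        # Sample skewness: near zero for a symmetric (normal-like) shape.
        skews[rate] = statistics.fmean(((x - m) / s) ** 3 for x in samples)
        print(f"rate {rate:3d}: mean {m:6.2f}, sd {s:5.2f}, "
              f"skewness {skews[rate]:+.2f}")
    ```

    The rare-event case (rate 2) is strongly skewed, while the rate-100 case is already close to symmetric, i.e. close to normal.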
  6. Aug 9, 2014 #5

    Vanadium 50

    Staff Emeritus
    Science Advisor
    Education Advisor
    2017 Award

    You can certainly count the number of swings the pendulum makes. So if you count 100 swings, you know the true number is no smaller than 99 and no bigger than 101. But you can do better than that: you can distinguish the pendulum at its highest point on the left from its highest point on the right, so you can count half-swings. By the same token, you know when the pendulum is at the bottom, so you can count quarter-swings, and maybe even eighth-swings. So I would argue the uncertainty in the number of swings is somewhere in the neighborhood of 0.2.
  7. Aug 9, 2014 #6


    Science Advisor
    Gold Member
    2017 Award

    You could use a knife edge and an optical detector to determine when the swing crossed the lowest point with great accuracy, and the spread of measured periods could then be dominated by the breeze, slamming doors, etc. (Those two influences would be much more Poisson-like.) But you would 'know' the mean value, to a small uncertainty, after just a few swings. On the other hand, counting the first few occasional clicks from a GM tube would give you very little idea of the mean interval to expect after 10000 clicks; the measured mean would migrate slowly to a limiting value. Poisson statistics would be the only way to describe the results.
    Different statistics suit different circumstances.
  8. Aug 9, 2014 #7
    You have a systematic error with both methods, most likely less with the computer and photogate, though if the electronics are wonky it could also be a large error. Here, if your setup is to start the timer and release the pendulum simultaneously, then it is your job to make sure both events happen as close together as possible. Certainly, releasing the bob and pressing the watch at the same time is difficult for a human to do, but counting more swings of the pendulum over a longer time will minimize the impact of this particular synchronization error.
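    The last point follows from simple arithmetic: a fixed synchronization error Δt contributes Δt/N to the measured period when N swings are timed. (The 0.2 s error and 2 s period below are assumed figures for illustration, not measurements.)

    ```python
    TIMING_ERROR = 0.2  # total start+stop human error in seconds (assumed)
    TRUE_PERIOD = 2.0   # pendulum period in seconds (assumed)

    for n_swings in (1, 10, 100):
        # The same absolute timing error is divided over more swings.
        period_err = TIMING_ERROR / n_swings
        print(f"{n_swings:3d} swings: period = {TRUE_PERIOD:.1f} "
              f"+/- {period_err:.3f} s")
    ```

    Timing 100 swings instead of one shrinks the period uncertainty from 0.2 s to 0.002 s without any change to the apparatus.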