I am a little confused about the justification behind what we do with quantities we believe to be Poisson distributed. Take, for instance, gamma-decay counts from a scintillation detector. Suppose we got 100 counts using a 1-minute integration time. We would then assume that the distribution of counts occurring in a 1-minute interval is Poisson distributed; I will refer to this would-be distribution as the parent distribution. Furthermore, we take the 100 measured counts to be a reasonable estimate of the mean of the parent distribution, and thus approximate the standard deviation as sqrt(100) = 10. We then report our measurement as 100 ± 10 counts.
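To make the sqrt(N) rule concrete, here is a small simulation (my own sketch, not from the post; the sampler and trial count are arbitrary choices) showing that repeated error-free 1-minute counts drawn from a parent distribution with true mean 100 scatter with a standard deviation of about sqrt(100) = 10:

```python
import math
import random

random.seed(0)

def poisson_sample(lam: float) -> int:
    # Knuth's multiplication method for Poisson sampling;
    # slow but fine for a modest mean like 100.
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= random.random()
    return k - 1

mu = 100           # true mean counts per 1-minute interval
n_trials = 20_000  # number of repeated 1-minute integrations
counts = [poisson_sample(mu) for _ in range(n_trials)]

mean = sum(counts) / n_trials
var = sum((c - mean) ** 2 for c in counts) / (n_trials - 1)
print(f"sample mean ~ {mean:.1f}, sample std ~ {math.sqrt(var):.2f}")
# For a Poisson distribution the variance equals the mean,
# so the std comes out near sqrt(100) = 10.
```

So a single measured value of 100 lets us estimate the width of the whole parent distribution, which is the usual justification for quoting ± sqrt(N).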

Now, it does not seem right to me to say that the uncertainty is ±10. Suppose for a moment that we had knowledge of the exact number of decays that occurred (i.e. there is no uncertainty in our measurement). If we were to take this measurement multiple times, we would still get a Poisson distribution of counts (since the probability of a single atom decaying is very small, etc.). We would then have a Poisson distribution of data points where each data point has no uncertainty associated with it.

Now what are we to report as our result? The mean? The median? We could report either; in fact, we could report any value that occurred in the distribution and we would be correct, since each value in the distribution occurred exactly (i.e. with no associated uncertainty). But I suppose that, to describe the range of values that occurred most frequently, we would report the mean ± the standard deviation. Here we have not given an uncertainty, per se. The 'error bars' we have listed (± one standard deviation) result simply from how we decided to describe the data. There was no one true value, so we just listed a number representative of the predominant values and gave a range to describe their spread.

This type of 'uncertainty' is not at all what we mean when we talk about experimental uncertainty. Experimental uncertainty has to do with the question of knowledge: "How well do we know that our reported value describes the value that actually occurred?" is the question we are faced with when considering experimental uncertainty. In the above example, however, we had complete knowledge of how many decays actually occurred, so the experimental uncertainty is zero. The range/error bars we reported were simply a concise way to describe a bunch of different measurements that took on a specific distribution. So it seems that reporting ± the standard deviation of the parent distribution does not give us an experimental uncertainty.
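The point that the ± band merely summarizes spread can be checked numerically. In this sketch (my own illustration, not from the post), every simulated count is known exactly, yet a sizable fraction of them, close to the familiar ~68% of a roughly normal distribution, fall within one standard deviation of the mean. That coverage is all the band communicates; no individual value carries any measurement error:

```python
import math
import random

random.seed(1)

def poisson_sample(lam: float) -> int:
    # Knuth's multiplication method for Poisson sampling.
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= random.random()
    return k - 1

mu, n_trials = 100, 20_000
# Each count below is an exact integer -- no measurement error at all.
counts = [poisson_sample(mu) for _ in range(n_trials)]

mean = sum(counts) / n_trials
std = math.sqrt(sum((c - mean) ** 2 for c in counts) / (n_trials - 1))
frac = sum(1 for c in counts if mean - std <= c <= mean + std) / n_trials
print(f"fraction within mean ± std: {frac:.2f}")
# Roughly two thirds (the exact value shifts a little with where
# the integer cutoffs of the band happen to fall).
```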
The type of thing we want from an experimental uncertainty answers questions like: "With your scintillation detector, how sure are you that it actually measures scintillations accurately? Does it randomly throw in a scintillation or two? What about ambient fluctuating noise? What about the effects of Earth's magnetic field, and the fact that your lab partner just drooled on the detector? Taking all of these things into account, how well do you know that the value you measured represents the value that actually occurred?" This question, however, is not answered by giving the standard deviation, since the count distribution would still have a non-zero standard deviation even if no experimental uncertainty whatsoever existed.
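One way to see the two contributions separately (again my own sketch, with an invented instrumental noise level, not anything from the post): if the detector adds its own random error on top of the counting statistics, independent variances add, so the observed spread grows from sqrt(mu) to roughly sqrt(mu + sigma_noise^2). The excess over sqrt(mu) is what the instrument, as opposed to the physics, contributes:

```python
import math
import random

random.seed(2)

def poisson_sample(lam: float) -> int:
    # Knuth's multiplication method for Poisson sampling.
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= random.random()
    return k - 1

mu = 100            # true mean decay count per interval
sigma_noise = 5.0   # hypothetical instrumental (readout) noise, in counts
n_trials = 20_000

# Observed reading = true Poisson count + Gaussian instrument error.
readings = [poisson_sample(mu) + random.gauss(0.0, sigma_noise)
            for _ in range(n_trials)]

mean = sum(readings) / n_trials
std = math.sqrt(sum((r - mean) ** 2 for r in readings) / (n_trials - 1))
print(f"observed std ~ {std:.2f}")
# Independent variances add in quadrature:
print(f"sqrt(mu + sigma_noise^2) ~ {math.sqrt(mu + sigma_noise**2):.2f}")
```

With these numbers the observed standard deviation lands near sqrt(125) ≈ 11.2 rather than 10, so in principle the instrumental part can be separated from the irreducible counting statistics.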

That was rather long-winded, and I appreciate anyone who took the time to read it through. Any help would be appreciated. Thanks.

**Physics Forums | Science Articles, Homework Help, Discussion**


# Standard Deviation not equal to experimental uncertainty?


