Standard Deviation not equal to experimental uncertainty?

AI Thread Summary
The discussion centers on the distinction between standard deviation and experimental uncertainty in the context of Poisson-distributed data, such as gamma-decay counts from a scintillation detector. It argues that while the standard deviation can describe the spread of counts, it does not equate to experimental uncertainty, which relates to how accurately a measurement reflects the true value. The participants highlight that even with complete knowledge of the decay events, the inherent randomness of the counting process still results in a non-zero standard deviation. They emphasize the importance of considering both Type A (evaluated statistically from the scatter of the data) and Type B (evaluated by other means, such as instrument specifications) uncertainties when reporting measurements. Ultimately, the conversation underscores that the standard deviation reflects variability in the data rather than the accuracy of the measurement itself.
AcidRainLiTE
I am a little confused about the justification behind what we do with things we believe to be Poisson distributed. Take, for instance, gamma-decay counts from a scintillation detector. Suppose we got 100 counts using a 1-minute integration time. We would then assume that the distribution of counts occurring in a 1-minute interval should be Poisson distributed; I will refer to this would-be distribution as the parent distribution. Furthermore, we take the 100 measured counts to be a reasonable estimate of the mean of the parent distribution and thus approximate the standard deviation as sqrt(100) = 10. We then report our measurement as 100 +- 10 counts.
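A minimal sketch of this procedure (the rate and the seed below are hypothetical, not from the thread): draw one Poisson count and report it as count +- sqrt(count).

```python
import numpy as np

rng = np.random.default_rng(0)
true_rate = 100.0                      # hypothetical mean counts per 1-minute interval

count = rng.poisson(true_rate)         # one measured count, playing the role of the 100 above
estimate, spread = count, np.sqrt(count)
print(f"report: {estimate} +- {spread:.1f} counts")   # e.g. 100 +- 10
```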

Now, it does not seem right to me to say that the uncertainty is +-10. Suppose for a moment that we had knowledge of the exact number of decays that occurred (i.e. there is no uncertainty in our measurement). If we were to take this measurement multiple times, we would still get a Poisson distribution of counts (since the probability of a single atom decaying is very small, etc.). We would then have a Poisson distribution of data points where each data point has no uncertainty associated with it.

Now what are we to report as our result? The mean? The median? We could report either; in fact, we could report any value that occurred in the distribution and we would be correct, since each value in the distribution occurred exactly (i.e. with no associated uncertainty). But I suppose that, to describe the range of values that occurred most frequently, we would report the mean +- standard deviation. Here we have not given an uncertainty, per se. The 'error bars' we have listed (+- standard deviation) result simply from how we decided to describe the data. There was no one true value, so we just listed a number representative of the predominant values and gave a range to describe their spread.

This type of 'uncertainty' is not at all what we mean when we talk about experimental uncertainty. Experimental uncertainty has to do with the question of knowledge: "How well do we know that our reported value describes the value that actually occurred?" is the question we face when considering experimental uncertainty. In the above example, however, we had complete knowledge of how many decays actually occurred, so the experimental uncertainty is zero. The range/error bars we reported were simply a concise way to describe a bunch of different measurements which took on a specific distribution. So it seems that reporting +- the standard deviation of the parent distribution does not give us an experimental uncertainty.

The kind of thing we want from an experimental uncertainty answers questions like: "With your scintillation detector, how sure are you that it actually measures scintillations accurately? Does it randomly throw in a scintillation or two? What about ambient fluctuating noise? What about the effects of Earth's magnetic field, and what about the fact that your lab partner just drooled on the detector? Taking all of these things into account, how well do you know that the value you measured represents the value that actually occurred?" This question, however, is not answered by giving the standard deviation since there would still be a non-zero standard deviation of the count distribution if no experimental uncertainty whatsoever existed.
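As a rough illustration of this scenario, one could simulate many intervals whose counts are known exactly (zero measurement error) and see that the sample standard deviation is still about sqrt(mean); the rate used below is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
exact_counts = rng.poisson(100.0, size=10_000)   # each value known exactly, no measurement error

print(f"mean   = {exact_counts.mean():.1f}")
print(f"median = {np.median(exact_counts):.1f}")
print(f"std    = {exact_counts.std(ddof=1):.1f}")   # ~ sqrt(100) = 10, despite zero measurement error
```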
That was rather long-winded, and I appreciate anyone who took the time to read it through. Any help would be appreciated. Thanks.
 
AcidRainLiTE said:
We then report our measurement as 100+-10 counts.
Are you sure you're not reporting your estimation of the true mean as 100 +- 10 counts?

(P.S. yes, that was a long paragraph, and I didn't make it through the wall of text before giving up)
 
"This question, however, is not answered by giving the standard deviation since there would still be a non-zero standard deviation of the count distribution if no experimental uncertainty whatsoever existed."

Correct.

Consider the following thought experiment. You decide to watch the experiment from behind an opaque panel. During the first experiment the opacity is maximal, so you miss lots of occurrences; perhaps you overcorrect for the opacity and "imagine" occurrences where there were none. Gradually, with each experiment, you reduce the opacity. In the final experiment there is perfect clarity and the measurement error is zero (what you described in your post). That does not mean that the standard deviation is zero. Why? Because there is a random element that "decides" whether the next occurrence should happen exactly 1 second, or (say) 1.0001 seconds, or 0.9999 seconds after the last one. (Alternatively, the random element might be deciding anywhere between 1.01 and 0.99 seconds.) What gives rise to the standard deviation is this innate randomness (and how spread out the counts can get as it "decides" when each occurrence should happen).
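A hedged sketch of this thought experiment, with made-up numbers: the true counts follow a Poisson process, and an assumed Gaussian "opacity" error is added on top. With perfect clarity the spread is purely the process spread; with the panel in place the two contributions add in quadrature.

```python
import numpy as np

rng = np.random.default_rng(2)
n, rate = 100_000, 100.0

true_counts = rng.poisson(rate, n)                        # what actually happened in each run
sigma_meas = 5.0                                          # hypothetical observer/readout error
observed = true_counts + rng.normal(0.0, sigma_meas, n)   # imperfect viewing through the panel

print(f"spread with perfect clarity : {true_counts.std():.1f}")   # ~10, process randomness only
print(f"spread through opaque panel : {observed.std():.1f}")      # ~sqrt(10**2 + 5**2) = 11.2
```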
 
EnumaElish, that makes sense. So that would seem to mean that reporting the standard deviation as sqrt(counts) for count data is really only an accurate estimate of 'uncertainty' (in the sense of possible variation from the reported value) if we assume that the equipment/method introduces no (or at least minimal) 'uncertainty.' Unless maybe the uncertainty introduced by the equipment is also Poisson.
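A small sketch of that last speculation, with hypothetical rates: if the equipment's spurious counts are themselves Poisson and independent of the signal, the total count is still Poisson (the means add), so sqrt(total counts) still estimates the overall spread, though it then lumps signal and background together.

```python
import numpy as np

rng = np.random.default_rng(3)
signal_rate, spurious_rate = 100.0, 20.0   # hypothetical decay rate and spurious-count rate
n = 100_000

totals = rng.poisson(signal_rate, n) + rng.poisson(spurious_rate, n)
print(f"mean = {totals.mean():.1f}, std = {totals.std():.1f}")   # ~120 and ~sqrt(120) = 11.0
```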
Hurkyl said:
Are you sure you're not reporting your estimation of the true mean as 100 +- 10 counts?
That is possible, and reading something from Bevington's "Data Reduction and Error Analysis" makes me think you are correct: "For a given number of observations, the uncertainty in determining the mean...is proportional to the standard deviation..."
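A quick numerical illustration of the Bevington quote, using simulated counts with a hypothetical rate: the uncertainty in the mean of n repeated counts is the standard deviation of a single count divided by sqrt(n).

```python
import numpy as np

rng = np.random.default_rng(4)
counts = rng.poisson(100.0, size=25)     # 25 hypothetical 1-minute runs
sigma = counts.std(ddof=1)

print(f"std of a single count   : {sigma:.1f}")                            # ~10
print(f"uncertainty of the mean : {sigma / np.sqrt(len(counts)):.1f}")     # ~10 / 5 = 2
```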

However, it makes a little more sense to me that the standard deviation is just a result of the innate randomness of the process, as EnumaElish said.

Thanks for the replies.
 
According to the BIPM's GUM, a measurement never produces the true value of the measurand, only an estimate of it. The uncertainty evaluated statistically from the scatter of repeated readings is called the Type A uncertainty; things like how well the scintillation detector actually registers an event are evaluated by other means and fall under the Type B uncertainty. How far you go depends on how completely you want to characterize your result, but you should not simply ignore the Type B contribution. I am not familiar with scintillation detectors, but for any measuring equipment, would you look at the manufacturer's specification or user manual for hints about the Type B uncertainty?
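A minimal sketch of the GUM-style bookkeeping described here; the readings and the spec tolerance below are hypothetical. Type A comes from the scatter of repeated readings, Type B from a manufacturer tolerance treated as a rectangular distribution, and the two are combined in quadrature.

```python
import numpy as np

readings = np.array([98, 105, 94, 102, 99, 107, 96, 101])   # hypothetical repeated counts
u_type_a = readings.std(ddof=1) / np.sqrt(len(readings))    # Type A: statistical, from the scatter

spec_half_width = 2.0                       # hypothetical +-2 count tolerance from a data sheet
u_type_b = spec_half_width / np.sqrt(3)     # Type B: rectangular-distribution assumption

u_combined = np.hypot(u_type_a, u_type_b)   # combined standard uncertainty (quadrature)
print(f"{readings.mean():.1f} +- {u_combined:.1f} counts")
```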
 
AcidRainLiTE said:
However, it makes a little more sense to me that the standard deviation is just a result of the innate randomness of the process, as EnumaElish said.
That is exactly what Hurkyl said. Your measurement, even if it were perfect, is not a perfect indicator of the mean because the underlying process itself is rather noisy. Run the experiment a couple more times and you might well get 120 and 90 counts as the results. Measurement noise and process noise are quite different beasts.
 