Data Analysis: Gamma ray attenuation

SUMMARY

This discussion centers on the analysis of gamma-ray attenuation data, specifically on how to apply statistical methods to photon-count data from an experiment. The experiment measured photon counts over two-minute intervals for varying material thicknesses; at some thicknesses the two repeated runs gave identical counts, yielding a standard deviation of zero and breaking the weighted curve fit. The resolution was to use the square root of the number of counts as the error estimate, which follows from the Poisson distribution and is standard practice in counting experiments. Participants also emphasized the need for a more thorough statistical treatment in educational settings.

PREREQUISITES
  • Understanding of Poisson distribution and its application in counting experiments
  • Familiarity with statistical methods for error propagation
  • Experience with curve fitting techniques for data analysis
  • Knowledge of gamma ray detection and measurement principles
NEXT STEPS
  • Research the application of Poisson statistics in experimental physics
  • Learn about advanced error propagation techniques in statistical analysis
  • Explore software tools for curve fitting, such as Python's SciPy or MATLAB
  • Study the principles of gamma ray attenuation and its implications in material science
USEFUL FOR

Researchers, physicists, and data analysts involved in experimental physics, particularly those working with radiation detection and statistical data analysis.

DeShark
Hello, I'm attempting to analyse the data recovered from an experiment that I performed in lab, but I'm having some problems understanding how to properly apply the statistical methods learned to this specific problem.

Essentially, the experiment consisted of placing a gamma-ray source near a detector and counting the number of photons detected in an interval of 2 minutes. We then placed increasing thicknesses of a given material between the source and the detector and observed how the number of photons counted (in 2 minutes) was affected.

As such, I have a table of thickness versus number of counts. Due to time constraints, we could only perform the experiment twice, so for each thickness I have two values for the number of photons counted. I figured I could take the average of these values and their standard deviation and use that to plot a curve with error bars. The problem is that for two of the thicknesses both runs gave exactly the same number of counts (this seemed insanely unlikely given that the counts are on the order of 500 and the other standard deviations are around 30-40), so for those two thicknesses I have a standard deviation of zero.

Looking at the curve, it seemed apparent that an exponential would be a good fit, so that is what I am attempting. However, my curve-fitting program is throwing up massive problems because of the two points with "zero" error. I expect the error to grow with the number of counts, so I can't just assign a constant error value. So my trouble is how to handle these zero errors in a statistically sound way. Any help would be enormously useful because I'm at an utter loss as to what to do here!
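Below is a minimal sketch, in Python/SciPy, of the kind of weighted exponential fit being attempted here, assuming the $\sqrt{N}$ Poisson errors that come up later in the thread; the thickness and count arrays are placeholders, not the data from this experiment.

```python
import numpy as np
from scipy.optimize import curve_fit

# Placeholder data -- not the values from this thread.
# thickness in cm, counts are the mean counts per 2-minute run.
thickness = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])
counts = np.array([520.0, 431.0, 355.0, 296.0, 244.0, 202.0])

# Poisson counting error: sqrt(N) is never zero for nonzero counts,
# so the weighted fit never divides by a zero uncertainty.
sigma = np.sqrt(counts)

def attenuation(x, n0, mu):
    # Exponential attenuation law: N(x) = N0 * exp(-mu * x)
    return n0 * np.exp(-mu * x)

popt, pcov = curve_fit(attenuation, thickness, counts,
                       p0=(counts[0], 1.0), sigma=sigma, absolute_sigma=True)
n0, mu = popt
n0_err, mu_err = np.sqrt(np.diag(pcov))
print(f"N0 = {n0:.0f} +/- {n0_err:.0f}   mu = {mu:.3f} +/- {mu_err:.3f} per cm")
```

With $\sqrt{N}$ as the per-point uncertainty, no point carries zero error, so a weighted fit of this kind no longer blows up.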
 
Did you ever resolve this issue?

One might consider looking at the systematic error from the detector and then making a good-faith estimate of the uncertainty implied by the likelihood of getting the same number of counts twice. Interesting question nonetheless.
 
Norman said:
Did you ever resolve this issue?

One might consider looking at the systematic error from the detector and then making a good-faith estimate of the uncertainty implied by the likelihood of getting the same number of counts twice. Interesting question nonetheless.

In the end I assumed that the number of counts has an error equal to the square root of the number of counts (I was told this is what to do, though I didn't understand why). Taking that into account, it was then a matter of finding the weighted mean along with its associated error. I still have no idea why we assume that the error on the number of counts from a radioactive source is the square root of the number of counts. If anyone could explain, it'd be useful.
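As a minimal sketch, assuming a $\sqrt{N}$ error on each individual count, the weighted mean and its uncertainty can be computed like this (the two count values are placeholders):

```python
import numpy as np

# Two repeated 2-minute counts at one thickness (placeholder values).
counts = np.array([512.0, 547.0])

# Poisson error on each individual count.
sigma = np.sqrt(counts)

# Inverse-variance weights give the weighted mean and its uncertainty.
weights = 1.0 / sigma**2
mean = np.sum(weights * counts) / np.sum(weights)
mean_err = 1.0 / np.sqrt(np.sum(weights))

print(f"weighted mean = {mean:.1f} +/- {mean_err:.1f}")
```

Note that even when both runs give the same count $N$, this yields a nonzero uncertainty of $\sqrt{N/2}$ on the mean, which is what rescues the points whose sample standard deviation is zero.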

As it was, I just blindly followed the statistical procedure I'd been taught, somewhat rushed, to get to an answer that seemed reasonable. It annoys me that we never do a proper statistical treatment. We had an 8-week course on data analysis and error propagation in first year, but I hardly think 8 hours of lectures suffices. Ah well...
 
That method comes from the Poisson distribution. If events occur randomly at an average rate $\lambda$ and you count them over a period $t$, then the distribution of the number of counts is
$$P(n) = \frac{(\lambda t)^n e^{-\lambda t}}{n!},$$
where $n$ is the number counted and $P(n)$ is the probability of getting $n$ counts. You can compute
$$\langle n\rangle = \sum_{n=0}^{\infty} n\,P(n), \qquad \langle n^2\rangle = \sum_{n=0}^{\infty} n^2\,P(n)$$
to find that the mean value of this distribution and its standard deviation are
$$\mu = \lambda t, \qquad \sigma = \sqrt{\langle n^2\rangle - \langle n\rangle^2} = \sqrt{\lambda t}.$$

When you do a counting experiment like this, your measured count $N$ is your best estimate of the mean $\lambda t$. Since the standard deviation is the square root of the mean, your best estimate of the uncertainty is $\sqrt{N}$, the square root of your number of counts.
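A quick numerical check of this, assuming a hypothetical expected count of 500 per run (roughly the scale quoted earlier in the thread):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical expected number of counts lambda*t in one 2-minute run.
lam_t = 500.0

# Simulate many repeated counting runs.
samples = rng.poisson(lam_t, size=100_000)

print("sample mean   :", samples.mean())   # ~ lambda*t = 500
print("sample stddev :", samples.std())    # ~ sqrt(lambda*t) ~ 22.4
```

Here $\sqrt{500} \approx 22.4$, which matches the spread of the simulated counts and is the same order as the 30-40 scatter seen between the two real runs.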
 
