
Discussing an experiment (radioactivity, Geiger-counter)

  1. Jun 28, 2016 #1
    1. The problem statement, all variables and given/known data

    Hi everybody! My homework this week is to discuss the results we obtained in an experiment last week, which was about determining the gamma-ray absorption coefficient of lead, ##\mu##, with a Geiger-counter and then reading off a graph the gamma-photon energy of the radioactive material caesium-137.

    For the absorption coefficient, we measured the time taken by the Geiger-counter to make 1000 counts for 5 lead plates of different thicknesses and with no plate, then performed a linear regression of ##\ln(I(d))## (##I## being counts/time minus the background level), which gave us ##\mu## (see attached pics). The background level was measured with nothing inside the compartments.

    It is important to note that since we did only one measurement of ##I_0## (counts/s without a plate), we could not plot ##\ln(\frac{I(d)}{I_0})##, because we could not calculate the covariance needed for the uncertainty. We were told to do it that way and to use ##I_0## as a parameter of the fit.

    The thicknesses of the plates were 1, 1.7, 3.3, 6.8 and 11.2, all given in mm.

    2. Relevant equations

    The relation between all those values is ##I(d) = I_0 \cdot \exp(-\mu \cdot d)##.
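
    For reference, a minimal sketch of how such a fit can be set up (assuming Python with NumPy/SciPy; the count rates below are hypothetical placeholders, not our data):

    ```python
    # Linear fit of ln(I) vs d, with ln(I0) and mu as free parameters, as described above.
    # The rates are placeholders, not measured values.
    import numpy as np
    from scipy.optimize import curve_fit

    d = np.array([1.0, 1.7, 3.3, 6.8, 11.2])    # plate thicknesses in mm
    rate = np.array([1.4, 1.3, 1.1, 0.8, 0.5])  # background-subtracted counts/s (placeholders)

    # Model in log space: ln I(d) = ln I0 - mu * d
    def log_model(d, ln_I0, mu):
        return ln_I0 - mu * d

    popt, pcov = curve_fit(log_model, d, np.log(rate))
    ln_I0, mu = popt
    mu_err = np.sqrt(pcov[1, 1])
    print(f"mu = {mu:.3f} +/- {mu_err:.3f} 1/mm, I0 = {np.exp(ln_I0):.2f} counts/s")
    ```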

    I will also give our results here: the absorption coefficient was found to be ##\mu = (0.097 \pm 0.004)\ mm^{-1}##. As a consequence, we found that the half-thickness was ##d_{1/2} = (7.1 \pm 0.3)## mm and that the mass attenuation coefficient was ##(0.086 \pm 0.004)\ cm^2/g##. After reading the graphs we were given (see attached pics), we determined the photon energy of ##^{137}##Cs to be ##E = (0.8 \pm 0.1)## MeV.
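
    (For clarity, these derived values follow directly from ##\mu##, assuming the standard density of lead ##\rho \approx 11.34\ g/cm^3##: ##d_{1/2} = \frac{\ln 2}{\mu} \approx \frac{0.693}{0.097\ mm^{-1}} \approx 7.1\ mm## and ##\frac{\mu}{\rho} \approx \frac{0.97\ cm^{-1}}{11.34\ g/cm^3} \approx 0.086\ cm^2/g##.)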

    Also important for the discussion of the fit (see below): ##I_{0,parameter} = (1.54 \pm 0.03)\ s^{-1}## and ##I_{0,measured} = (1.65 \pm 0.06)\ s^{-1}##.

    3. The attempt at a solution

    The problem is that the reference value I found for the energy is ##E_{ref} = 661.64## keV, and I must now explain this non-negligible difference in results. I am in my first year, so I have never had a lecture on quantum physics and am a bit clueless. I figured that the error is most probably located in the fit, and if ##\mu## were bigger then ##E## would tend towards ##E_{ref}##. Through research and thinking I have some suggestions, but I can't be sure whether they are right or wrong:

    - if we did more measurements of ##I_0##, we would be able to calculate the covariance and perform the linear fit with ##\ln(\frac{I(d)}{I_0})##. That would most probably result in a bigger ##\mu## since ##I_{0,measured} > I_{0,parameter}##;

    - a Geiger-counter is only 1% efficient at detecting gamma-rays. Though it is a limitation of the measuring instrument, I am not sure whether this has any statistical impact on our measurements;

    - the Geiger-counter creates a "dead time" of ##\tau = 100\ \mu s## after each detection, which could prevent another detection from being made during this time. There is a formula to calculate its influence: ##n = \frac{n_{measured}}{1 - n_{measured} \cdot \tau} = 1111## counts. I find this very big; does that make sense?

    - 3 other teams performed the same experiment near us. Could that affect our measurement of the background level?

    - the thicknesses of the lead plates were given without uncertainties, and the plates could have been inhomogeneous. Our probe was located just under the plate. I've read that this could create a scattering effect increasing the number of counts. Could that be the case? All of the areas were the same, and we didn't pile up the plates (there were 5 different ones);

    - there are two types of interaction happening inside the tube: photo-absorption and Compton scattering. If the collision between a photon and an electron is Compton scattering, could it be that the Geiger-counter detects two gamma photons instead of one, or is the dead time large enough to prevent it?

    As you can see, this is all quite confusing for me. Do you have remarks about what I just wrote, or clues about other sources of uncertainty? I hope I didn't forget anything important.


    Thank you very much in advance.


    Julien.
     

    Attached Files:

  3. Jun 28, 2016 #2
    Okay I just found out that Compton scattering can only happen when ##E > 1.02##MeV. One thing cleared up!
     
  4. Jun 28, 2016 #3

    mfb

    Staff: Mentor

    Where is the non-negligible difference? One more digit for the 0.8 +- 0.1 MeV value would help, but 140 keV difference with 100 keV uncertainty is perfectly fine.

    I don't understand the problem you see with the I0 measurement. Just put it into the fit as the value for a plate with 0 thickness?
    Not on a level that would be relevant here. The actual efficiency is much worse, as your Geiger counter covers only a small fraction of the solid angle around the probes, but that doesn't matter either.

    Deadtime: What were your actual count rates? The dead-time becomes relevant if the average time between detections is comparable to it. Here: if your detection rate is of the order of 500 Hz or more. If you had such a high rate, why did you just take 1000 detections per sample?

    Good idea! You can estimate the distance to their source, and the maximal possible impact on your measurement based on that.
    I would not expect a relevant effect from that.
    How long do you expect a photon and a high-energetic electron to be in the Geiger counter?
    No, that is the limit for pair production, Compton scattering can happen earlier.
     
  5. Jun 28, 2016 #4
    Hi @mfb and first thank you so much for your very complete answer!

    Really? When a value lies outside the error boundaries, we usually have to find a pretty good explanation for it.

    The problem does not relate to ##I_0## itself but to the calculation of the uncertainty. If we plot ##\ln \big( \frac{I(d)}{I_0} \big)##, we have to take into account that ##I(d)## and ##I_0## are correlated. Since we have only one measurement of ##I_0##, we cannot determine the covariance (we only know it lies between 0 and 1). Actually, we were told pretty clearly to do it that way and to have ##I_0## as a parameter. Why we measured it anyway is a mystery to me, maybe so that we can discuss the poor methodology of the experiment.

    Okay, thanks.

    We took 1000 detections per sample because those were the guidelines of the experiment. I believe the reason is that we did other things during this experiment, like a probability distribution for cobalt-60, and that measurement alone took over an hour to process. About the count rates, we found ##I(d_1) = 1.44\ s^{-1}##, ##I(d_2) = 1.28\ s^{-1}##, ##I(d_3) = 1.08\ s^{-1}##, ##I(d_4) = 0.82\ s^{-1}## and ##I(d_5) = 0.51\ s^{-1}##. Note that I subtracted the background level before giving you those values.

    Nice!

    Okay.

    I wouldn't know how to estimate that, but probably not very long. Like much shorter than ##100 \mu s##.

    Oh really? Still, would it have an impact since the dead time is not that short?
     
  6. Jun 28, 2016 #5

    mfb

    Staff: Mentor

    Uncertainties are usually given with their standard deviation or similar metrics - typically (~2/3 probability) the actual value should be within that range if the estimate is correct, but it can also be a bit outside. If the deviation exceeds two standard deviations, things get more interesting. Absolute error bounds ("there is no way it can be more than that") are rare because you are never absolutely sure about any measurement.
    Where do you expect a correlation? In particular, which correlation do you see that would not be present between the different I(d) values?

    You can have I0 as free parameter in the fit and still plug in your measurement of I(0).

    At a rate of ~1/s, the dead time is completely negligible. I guess you didn't subtract 500/s background from 501/s signal...
    Right. Just consider how long an electron at relativistic speeds (0.05 c?) needs to travel across your Geiger counter.
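    (A rough order-of-magnitude estimate, assuming a tube a few centimetres across: ##t \approx \frac{0.03\ m}{0.05 \cdot 3 \times 10^{8}\ m/s} \approx 2\ ns##, vastly shorter than the ##100\ \mu s## dead time.)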
    No.
     
  7. Jun 28, 2016 #6
    Okay good to know.

    You're right, I didn't get that until now. I'm gonna add ##I_0## as ##I(0)## in the fit. About the correlation, I would think that if the components of the fit are parameters, then they are not correlated. I might be wrong, but the guy wrote ##\ln \big( \frac{I(d)}{I_0} \big)## boxed on the board so I don't really want to challenge him. :DD

    I don't get that. The way I understood the formula was ##n_{corrected} = \frac{n_{measured}}{1 - n_{measured} \cdot \tau} = \frac{1000}{1 - 1000 \cdot 100 \cdot 10^{-6}} = 1111## counts. What am I doing wrong here?

    Okay good.
     
    Last edited: Jun 28, 2016
  8. Jun 28, 2016 #7
    And thanks a lot again for your answer @mfb !
     
  9. Jun 28, 2016 #8

    mfb

    Staff: Mentor

    It is possible to use those ratios, but (a) the values are not correlated in a relevant way and (b) not dividing the counts is a more natural way to fit the data. Dividing everything by a constant doesn't change the fit result in any way.

    I don't know where you got that formula from, but it is wrong. The units don't even match. If ##n## is interpreted as a rate instead of a count number, it works.
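
    (A quick check with the rates quoted earlier, taking ##n## as a rate: ##n_{true} = \frac{1.65\ s^{-1}}{1 - 1.65\ s^{-1} \cdot 100 \times 10^{-6}\ s} \approx 1.6503\ s^{-1}##, a correction of roughly 0.02 %, i.e. completely negligible here.)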
     