# Calculating the uncertainty in a Geiger counter

DanishForestCat
Homework Statement:
Hello!

I have a straightforward (I think...) uncertainties problem, that I still can't do. I need to calculate the uncertainty (in seconds) in the deadtime of a Geiger tube.
Relevant Equations:
The formula for deadtime is (approximately) T = (A+B-C)/(2*A*B). A, B and C are all count rates (in counts per second).

A = 209
B = 273
C = 456

I believe that the absolute uncertainty in A is sqrt(A) (I also have a small absolute uncertainty from using a stopwatch).
Does the absolute uncertainty in A+B-C equal √A+√B+√C? If so, this is larger than the value of A+B-C. Surely that can't be right?

For this investigation, I have been working from a lab script from my local uni physics department. This suggested that calculating this actually required a method using partial derivatives. Is this right? (if so, the calculation would probably be beyond the scope of my course, though if I could work it out, I would obviously use it)

I am really struggling with this, and would be massively grateful for any help.

Many thanks!

## Answers and Replies

Homework Helper

I believe that the absolute uncertainty in A is sqrt(A) (I also have a small absolute uncertainty from ..
If you can ignore the stopwatch and you counted for 1 sec , your 'belief' is reasonable.
If you counted for 100 sec and divided by 100, much less so !

Re: √A+√B+√C
Absolutely not ! You should read up on error propagation, e.g. http://ipl.physics.harvard.edu/wp-uploads/2013/03/PS3_Error_Propagation_sp13.pdf (or google for a presentation in a style that suits you). Go on until you understand the general formula $$\Biggl ( \Delta f(a,b,c)\Biggr )^2 = \left ( {\partial f\over \partial a}\Delta a\right )^2 + \left ( {\partial f\over \partial b}\Delta b\right )^2 + \left ( {\partial f\over \partial c}\Delta c\right )^2$$ which, when applied to A+B-C, yields 31 for me; a little less than your 52, but still over 100%.

Bottom line: subtracting two almost equal numbers can give big errors. Simple example: ##(1000 \pm 10) - (990 \pm 10) = 10 \pm 14## if the errors are uncorrelated.
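The difference between linear addition and addition in quadrature is easy to check numerically. A minimal sketch using the count rates from the problem statement, treating √N as the uncertainty on each rate (as discussed above):

```python
from math import sqrt

# Count rates from the problem; uncertainties taken as sqrt(N)
A, B, C = 209, 273, 456
dA, dB, dC = sqrt(A), sqrt(B), sqrt(C)

linear = dA + dB + dC                     # pessimistic: adds worst cases
quadrature = sqrt(dA**2 + dB**2 + dC**2)  # assumes independent errors

print(f"A+B-C         = {A + B - C}")
print(f"linear sum    = {linear:.0f}")      # ~52
print(f"in quadrature = {quadrature:.0f}")  # ~31

# The simple subtraction example: (1000 +/- 10) - (990 +/- 10)
print(f"10 +/- {sqrt(10**2 + 10**2):.0f}")  # 10 +/- 14
```

Either way, the uncertainty exceeds the difference itself, which is the point of the "bottom line" above.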

DanishForestCat
DanishForestCat
Hi! Thank you very much for your answer - it is very useful. I've never been taught the formula you give; instead, my textbooks all say that the absolute uncertainty when adding or subtracting is simply the individual uncertainties added together.

I appreciate that this may be an oversimplification - but am I able to do these calculations on the basis of this formula, instead of the quadrature rule?

Also: in any case, how should I handle such a large uncertainty? In practice, the deadtime of the Geiger counter has minimal impact (particularly as I was working with fairly low-activity sources). But on the basis of these uncertainty calculations, I worry I'm now going to have a massive uncertainty in my final result.

Homework Helper
New avatar !
I've never been taught the formula you give
You might recognize $$df= {\partial f\over \partial a}\;da + {\partial f\over \partial b}\;db + \text {etc.}$$ from calculus.
[edited -- for some reason rendering was badly distorted]
You can't simply add the error in the numerator and the error in the denominator for a quotient like (A + B - C)/(2AB), because numerator and denominator are correlated. (A quick sanity check: (A+B)/(A+B) is exactly 1 with zero uncertainty, even though numerator and denominator each carry the full errors of A and B.)

The quadrature comes from probability considerations: a Gaussian in 1D becomes a Gaussian 'hat' in 2D (see e.g. Wikipedia). Linear addition is overly pessimistic.

am I able to do these calculations on the basis of this formula
Nobody to stop you.

In practice, the deadtime of the Geiger counter has minimal impact (particularly as I was working with fairly low-activity sources).
That's good to hear.

But on the basis of these uncertainty calculations, I worry I'm now going to have a massive uncertainty in my final result.
Is a bit in contradiction with the preceding: after all, your deadtime is only 0.2 ##\pm## 0.3 ms (or, if you want, 0.2 ##\pm## 0.5 ms). Of course for deadtimes we do have a requirement that they are positive.

This means a correction of e.g. 5% to A (from 209 ##\pm## 15 to 219 ##\pm## ... ).
The ... is > 15 . Worst case deadtime might be 0.4 ms leading to 231 -- a difference of 12 counts/sec, and almost doubling the error when doing linear addition.

Unless:

You did not react to my
If you can ignore the stopwatch and you counted for 1 sec , your 'belief' is reasonable.
If you counted for 100 sec and divided by 100, much less so !
Did you indeed count for 1s at every measurement ?

(I found deadtime values of 100 to 500 μs in a linked reference, which also uses your formula.)

Homework Helper
Gold Member
2022 Award
A, B and C are all count rates (in counts per second).

A = 209
B = 273
C = 456
As BvU indicates, you need to work with the actual counts, not the count rates. That said, these numbers do look like counts, right?

DanishForestCat
BvU and haruspex: I am so grateful for your help - thank you! Incidentally, the link you give is the same document I've been using as a guide for much of this.

OK, I understand why linear addition won't work. I'm afraid that the Gaussian functions are a little beyond me, at the moment.

You've also really helped me to see what I can expect the size of my uncertainties to be.

The figures I gave above are in counts per second, but I counted over a longer period. Taking account of the uncertainties in both the count and the time period, the absolute uncertainty in 209 is 5; in 273, 7. This is on the basis that the uncertainty in the time is 1 second, and the uncertainty in a raw count N is √N.

I appreciate that it is a less elegant solution, but would it work if I were to find the maximum and minimum values of the deadtime (within the different absolute uncertainties), and then take the absolute uncertainty in the deadtime as half of the difference?
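The max/min approach described above can be brute-forced over every ±σ corner. A minimal sketch, assuming the uncertainties of 5 and 7 counts/s given above, plus a hypothetical σ_C ≈ 10.6 counts/s (C's uncertainty is not stated in the post):

```python
from itertools import product

A, B, C = 209.0, 273.0, 456.0
sA, sB = 5.0, 7.0
sC = 10.6  # assumption: not given in the post; sqrt-count plus timing estimate

def deadtime(a, b, c):
    return (a + b - c) / (2 * a * b)

# Evaluate T at every +/- sigma corner of (A, B, C)
corners = [deadtime(A + i * sA, B + j * sB, C + k * sC)
           for i, j, k in product((-1, 1), repeat=3)]

T = deadtime(A, B, C)
half_range = (max(corners) - min(corners)) / 2
print(f"T = {T*1e6:.0f} +/- {half_range*1e6:.0f} us (min-max estimate)")
```

Note the half-range estimate tends to be pessimistic compared with addition in quadrature, since it treats all worst cases as happening at once.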

Homework Helper
Gold Member
2022 Award
Taking account of the uncertainties in both the count and the time period, the absolute uncertainty in 209 is 5; in 273, 7. This is on the basis that the uncertainty in the time is 1 second, and the uncertainty in the count (C) is √C.
Working backwards from that, I deduce the time period was about 46 seconds. If not, please explain how you arrive at this.

DanishForestCat
Working backwards from that, I deduce the time period was about 46 seconds. If not, please explain how you arrive at this.

Yes, approximately.

EDIT: and although right at this moment I do not have the data to hand, the total counts were obviously between approx. 9600-12600.

Homework Helper
Gold Member
2022 Award
would it work if I were to find the maximum and minimum values of the deadtime (within the different absolute uncertainties), and then take the absolute uncertainty in the deadtime as half of the difference?
It still isn't completely kosher, but it would be reasonable to write ##T=\frac 1{2A}+\frac 1{2B}-\frac C{2AB}## and treat those three terms as independent.
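Treating those three terms as independent, the propagation is straightforward to sketch. The σ values below are the combined count-plus-timing uncertainties worked out later in the thread; swap in your own:

```python
from math import sqrt

A, B, C = 209.0, 273.0, 456.0
sA, sB, sC = 5.12, 6.55, 10.62   # combined count + timing sigmas (counts/s)

# T = 1/(2A) + 1/(2B) - C/(2AB), the three terms treated as independent
t1, t2, t3 = 1 / (2 * A), 1 / (2 * B), C / (2 * A * B)

s1 = sA / (2 * A**2)   # sigma of 1/(2A)
s2 = sB / (2 * B**2)   # sigma of 1/(2B)
# for the product/quotient term, relative errors add in quadrature
s3 = t3 * sqrt((sA / A)**2 + (sB / B)**2 + (sC / C)**2)

T = t1 + t2 - t3
sT = sqrt(s1**2 + s2**2 + s3**2)
print(f"T = {T*1e6:.0f} +/- {sT*1e6:.0f} us")
```

Since A and B actually appear in all three terms, this independence assumption overstates the error somewhat, which is why it "isn't completely kosher".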

Homework Helper
the uncertainty in the time is 1 second
I, too, have reverse-engineered a bit (because I am intrigued by this thread) and found that the uncertainty in time is a major contribution to the error in the final result. So: more questions on this.

How did you determine the 1 s ? You have to start and stop the counter and a stopwatch, so a reaction time comes in twice.

So far I have (assuming a 45 s counting time):

| Counts | root | Counts/s | delta from root | delta from 1 s | added in quadr |
|---:|---:|---:|---:|---:|---:|
| 9405 | 97.0 | 209 | 2.16 | 4.64 | 5.12 |
| 12285 | 110.8 | 273 | 2.46 | 6.07 | 6.55 |
| 20520 | 143.2 | 456 | 3.18 | 10.13 | 10.62 |

deadtime 230 ##\pm## 80 ##\mu##s
(80 from the general expression in post #2).

So correcting e.g. A for deadtime changes 209 to 219 and 5 to 7 (again, using addition in quadrature).

Looks as if an effort to improve timing accuracy is worthwhile ...

By the way, I guess you are in high school and if that is so, hats off for this valiant analysis !
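The numbers in the table above can be reproduced directly; a minimal sketch, assuming a 45 s counting time and a 1 s timing uncertainty:

```python
from math import sqrt

t, dt = 45.0, 1.0  # counting time and its uncertainty (seconds)

for counts in (9405, 12285, 20520):
    rate = counts / t                      # counts/s
    d_root = sqrt(counts) / t              # Poisson (sqrt-count) contribution
    d_time = rate * dt / t                 # timing contribution
    d_total = sqrt(d_root**2 + d_time**2)  # added in quadrature
    print(f"{counts:6d} {rate:5.0f} {d_root:5.2f} {d_time:6.2f} {d_total:6.2f}")
```

Note how the timing contribution dominates the Poisson one in every row, which is what makes the stopwatch the limiting factor here.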

DanishForestCat
I would like to make a comment on the OP equation for dead time estimate since others may use this thread for their own purpose. Although the OP notes that the equation for dead time used is an approximation it is in fact an approximation of an approximation.

GM counters can have significant dead times. The count rate which is observed say A is related to the real count rate N and the dead time t by the relation

N = A/(1 -At) assuming t is independent of count rate which may not be true for high count rates.

Using the two source method of determining dead time with A the count rate for one source and B the count rate for another source, C the count rate for both together, and b.g. the background count rate, the relationship of count rates and dead time is

A/(1-At) +B/(1-Bt) = C/(1-Ct) + b.g./(1-bg⋅t) again assuming no significant dependence of t on count rate.

The b.g. count rate can often be dropped but for accurate determinations of t one should verify this.

If you take A/(1-At) ≅ A + A²t, and similarly for B and C (which is often done), you can simplify the expression:

t = (A+B-C)/(C² - A² - B²) as the first approximation

but this is valid only if At and Bt and Ct are << 1 depending on accuracy desired.

To get the OP expression t = (A+B-C)/(2AB) you must further assume C = A+B in the denominator, which can produce a substantial additional error if this approximation is not verified.

So care must be exercised in using this relationship.

Comparing the OP equation to the first approximation using the numbers generously provided in post #10 one obtains

for the OP relation we have 230 μsec vs 290 μsec using the first approximation.

Checking whether or not the first approximation is valid for this case for A =209 , B=273 , C=456 and t = 290μsec.

A⋅t = 0.06, B⋅t = 0.08, C⋅t = 0.13, which look OK
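A quick numerical check of the two expressions side by side, using the values from post #10:

```python
A, B, C = 209.0, 273.0, 456.0
num = A + B - C

t_op = num / (2 * A * B)               # OP's formula; assumes C ~ A + B in the denominator
t_first = num / (C**2 - A**2 - B**2)   # first approximation

print(f"OP formula:          {t_op*1e6:.0f} us")     # ~228
print(f"first approximation: {t_first*1e6:.0f} us")  # ~290

# validity check: A*t, B*t, C*t should all be << 1
for name, r in (("A", A), ("B", B), ("C", C)):
    print(f"{name}*t = {r * t_first:.2f}")
```

The ~25% spread between the two estimates illustrates the warning above about the extra C = A+B assumption.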

DanishForestCat and BvU
DanishForestCat
I, too, have reverse-engineered a bit (because I am intrigued by this thread) and found that the uncertainty in time is a major contribution to the error in the final result. So: more questions on this.

How did you determine the 1 s ? You have to start and stop the counter and a stopwatch, so a reaction time comes in twice.

So far I have (assuming a 45 s counting time):

| Counts | root | Counts/s | delta from root | delta from 1 s | added in quadr |
|---:|---:|---:|---:|---:|---:|
| 9405 | 97.0 | 209 | 2.16 | 4.64 | 5.12 |
| 12285 | 110.8 | 273 | 2.46 | 6.07 | 6.55 |
| 20520 | 143.2 | 456 | 3.18 | 10.13 | 10.62 |
deadtime 230 ##\pm## 80 ##\mu##s

(80 from the general expression in post #2).

So correcting e.g. A for deadtime changes 209 to 219 and 5 to 7 (again, using addition in quadrature).

Looks as if an effort to improve timing accuracy is worthwhile ...

By the way, I guess you are in high school and if that is so, hats off for this valiant analysis !

For a little context: yes, I'm a high school student, doing IB Diploma Standard Level Physics. This is for my main piece of coursework: an investigation into how beta radiation from a Sr-90 source attenuates in paper. In the end, we are marked on our overall process, and an inaccuracy in the final uncertainty will not significantly affect how my work is assessed, provided that I evaluate and meaningfully reflect on where the inaccuracy comes from and how it could be avoided in future. With that said, I find this problem fascinating, and want to get it right!

The 1 second uncertainty is because (on advice) I rounded my times to the nearest second - and my teacher told me that my uncertainty would be ##\pm## the smallest increment on my meter. I am beginning to appreciate that the significance of the uncertainty in the time may mean this is an important source of imprecision, but it's the data I've got.

Please could you explain how you reached ##\pm## 80 ##\mu##s? I tried to find the uncertainty using the expression in post #9, and ended up with 1700 ##\mu##s... I'm sure I've made an error (at least, I hope I have!); I just haven't found it yet. I'd really like to be able to use the expression given in post #2, but I don't understand the maths at the moment (I'm doing a standard level high school maths course, and my experience with calculus is limited).

I have a further issue. I have what appears to be a good set of data, that has an almost perfect exponential regression (or linear regression, when I take the natural logarithm of each data point to find the attenuation coefficient).

For each point, I've measured how the (linear) thickness of a paper barrier affects how long it takes the Geiger counter to reach a pre-determined number of counts. I recorded the time taken, and the actual number of counts reached in that time: if I'm aiming to reach 10,000 counts, I might overshoot slightly and reach 10,070. From these two numbers, I then calculate the counts per second.

I then have two corrections to apply: deadtime, and background radiation (itself corrected for deadtime). The correction formula for deadtime is n/(1 - nT), where n is the uncorrected count rate and T is the deadtime. Given that n appears twice (hence, the variables are correlated), how can I find the uncertainty in this? I'm imagining that I'd use the expression in post #2, but as I said, I'm struggling to understand how to use this expression.

Homework Helper
Gold Member
2022 Award
The correction formula for deadtime is n/(1 - nT), where n is the uncorrected count rate and T is the deadtime. Given that n appears twice (hence, the variables are correlated), how can I find the uncertainty in this?
If ##p=\frac x{1-xy}## where x and y have errors ##\Delta x##, ##\Delta y## we can differentiate:
##\Delta p=\frac{\partial \frac x{1-xy}}{\partial x}\Delta x+\frac{\partial \frac x{1-xy}}{\partial y}\Delta y##
##=(\frac 1{1-xy}+\frac{xy}{(1-xy)^2})\Delta x+\frac{x^2}{(1-xy)^2}\Delta y##
##=\frac{\Delta x+x^2\Delta y}{(1-xy)^2}##
I think I got that right.

The next step is to allow that we do not know what the errors are, only that they are independent and with a presumed variance. This involves summing the squares as in post #2.

This leads to ##\sigma_p^2=\frac{\sigma_x^2+x^4 \sigma_y^2}{(1-xy)^4}##
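Plugging in numbers from earlier in the thread (x = 209 counts/s with σ = 5.12, and a deadtime of 228 μs with σ = 72 μs), a minimal sketch of this propagation:

```python
from math import sqrt

x, sx = 209.0, 5.12     # uncorrected rate and its sigma (counts/s)
y, sy = 228e-6, 72e-6   # deadtime and its sigma (s), from earlier posts

p = x / (1 - x * y)                               # deadtime-corrected rate
sp = sqrt(sx**2 + x**4 * sy**2) / (1 - x * y)**2  # from the formula above

print(f"corrected rate = {p:.0f} +/- {sp:.0f} counts/s")
```

This reproduces the "changes 209 to 219 and 5 to 7" correction quoted elsewhere in the thread.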

Homework Helper
explain how you reached ±80 μs
Oh boy, I need to explain a contorted piece of Excel !
I follow post #2, but since I'm really bad at doing symbolic algebraic manipulation with Excel, it's rather verbose:

|   | counts/s | sigma |
|---|---:|---:|
| a | 209 | 5.12 |
| b | 273 | 6.55 |
| c | 456 | 10.62 |

and define z = a + b - c , g = 2ab, so that f = z/g. Substituting values: 26/114114 ≈ 228 μs

We need ##{\partial f\over \partial a} = {\partial z\over \partial a} {1\over g }- {\partial g\over \partial a} {z\over g^2} \quad## idem b and c.
##{\partial z\over \partial a} = \ \ 1\quad ## idem ##\ b\rightarrow \ \ 1\quad## and ##\ c \rightarrow -1##
##{\partial g\over \partial a} =\, 2b\quad## idem ##\ b\rightarrow 2a \,\quad## and ##\ c \rightarrow 0 ##

Note the minus sign ! Putting things together:

##{\partial f\over \partial a} = {1\over g }- {2bz\over g^2}\qquad { 8.76 \ {\scriptstyle 10^{-6}} -1.09\ {\scriptstyle 10^{-6}} = 7.67\ {\scriptstyle 10^{-6}}}##
##{\partial f\over \partial b} = {1\over g }- {2az\over g^2} \qquad { 8.76\ {\scriptstyle 10^{-6}} -8.34\ {\scriptstyle 10^{-7}} = 7.92\ {\scriptstyle 10^{-6}}} ##
##{\partial f\over \partial c} = -{1\over g }\qquad \quad\ \ { -8.76\ {\scriptstyle 10^{-6}}} ##
Multiply with ##da## etc, square, add up, take root (see post #2) yields ## 72 \ \mu##s.

(When I said 80, I had omitted the minus signs.)

Note that this is only the uncertainty in the deadtime. If push comes to shove, you should also carry it through the calculation of the corrected counting rates, and then through to the attenuation from the paper.

Phew . I can imagine this is a bit beyond what your teacher expects ...

## \ ##
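The partial derivatives above are easy to verify numerically; a minimal sketch of the central value and the three derivatives (the σ propagation then follows post #2):

```python
a, b, c = 209.0, 273.0, 456.0
z = a + b - c   # numerator
g = 2 * a * b   # denominator
f = z / g       # deadtime, seconds

# quotient rule: df/dx = (dz/dx)/g - (dg/dx) * z/g^2
df_da = 1 / g - 2 * b * z / g**2
df_db = 1 / g - 2 * a * z / g**2
df_dc = -1 / g                     # note the minus sign!

print(f"f     = {f*1e6:.0f} us")   # ~228
print(f"df/da = {df_da:.3g}")      # ~7.7e-06
print(f"df/db = {df_db:.3g}")      # ~7.9e-06
print(f"df/dc = {df_dc:.3g}")      # ~-8.8e-06
```

Multiplying each derivative by its Δ, squaring, summing, and taking the root then gives the quadrature uncertainty, exactly as in post #2.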

There is a way to determine the reaction time so that you can correct the actual counting time. Measure the counts for some extended time say 5 minutes call it C5. Then make five measurements at one minute and add them and call the sum C1. The reaction time is then given by

t = T(C1 - C5)/(5⋅C5 - C1) where T is the nominal assumed time. So the actual time is T + t (see below for derivation)

if C1>C5 then t >0 and your actual time is longer than assumed.

As for the uncertainty in the time, you could try to measure the variation in count rate, say over five measurements, and compare this variation to the expected statistical variation. Thus you can write

##\sigma_{nom}^2 + \sigma_{time}^2 = \sigma_{measured}^2##

where ##\sigma_{nom}## is √counts and ##\sigma_{measured}## is the standard deviation of the five measurements. You might need a lot of counts to see an effect of this variation.
_______________________________________
Derivation:

1. Counts in time T: C = R_actual(T + t), where R_actual is the actual count rate.

2. For counts measured in time T/n: C_{T/n} = R_actual(T/n + t); thus for the total of n measurements of time T/n: C_n = ΣC_{T/n} = R_actual(T + nt).

Take equations 1 and 2, solve each for R_actual, set them equal, and solve for t.
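That reaction-time formula can be sanity-checked on synthetic data. A sketch with a hypothetical true rate of 200 counts/s and a reaction time of 0.3 s, where each timed interval is effectively lengthened by t:

```python
R_true, t_true = 200.0, 0.3   # hypothetical: true rate (counts/s), reaction time (s)
T = 300.0                     # nominal long measurement: 5 minutes

# Each timed interval actually lasts (nominal + t_true)
C5 = R_true * (T + t_true)          # one 5-minute measurement
C1 = 5 * R_true * (T / 5 + t_true)  # sum of five 1-minute measurements

t = T * (C1 - C5) / (5 * C5 - C1)   # recovered reaction time
print(f"recovered t = {t:.3f} s")   # -> 0.300
```

The formula recovers the reaction time exactly on noise-free data; with real counts the Poisson scatter discussed above limits how precisely t can be extracted.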
