# Poisson Statistics + Photon Detections

## Main Question or Discussion Point

Hi there,

Having done a Google, I wasn't able to find much information relating specifically to Poisson statistics and photon detections.

I was wondering why photon detection experiments are calculated using Poisson statistics?
(So for example, would Poisson distribution calculations be applied to Bell inequality tests etc.?)

What assumptions, if any, are made in relation to the calculations, for when you calculate the Poisson confidence intervals for photon experiments?

Any assistance would be much appreciated,
Stevie

Simon Bridge
Homework Helper
I was wondering why photon detection experiments are calculated using Poisson statistics?
Because they involve counting things.

A quick google for "poisson distribution" provided the following in the top 10 hits:
Particle Counting Statistics: PHYS 331: Junior Physics Laboratory I (Rice University TX).
The Poisson Distribution: Statistics Lessons (UMass Amherst)

From the latter:
The Poisson distribution applies when:
(1) the event is something that can be counted in whole numbers;
(2) occurrences are independent, so that one occurrence neither diminishes nor increases the chance of another;
(3) the average frequency of occurrence for the time period in question is known; and
(4) it is possible to count how many events have occurred, but meaningless to ask how many such events have not occurred.
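These conditions can be sanity-checked numerically: counts from a process with independent, whole-number occurrences at a known average rate follow a Poisson distribution, whose hallmark is that the mean equals the variance. A minimal sketch in Python (the rate of 5 detections per counting window is an arbitrary choice for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate photon counts: an average of 5 detections per counting window,
# repeated over many windows.
counts = rng.poisson(lam=5.0, size=100_000)

# Hallmark of Poisson statistics: mean and variance are both equal to lam.
print(counts.mean())  # close to 5
print(counts.var())   # also close to 5
```

If the sample variance came out much larger or smaller than the sample mean, that would be a hint the counting process violates one of the conditions above (e.g. the occurrences are not independent).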

kith
Poisson statistics are related to the so-called coherent states of the electromagnetic field. These states are very classical in the sense that the uncertainty between amplitude and phase is minimal and independent of their value. A laser field is typically in a coherent state. States with a small definite particle number are very different from coherent states.

From the wiki article:
"Physically, this formula means that a coherent state remains unchanged by the detection (or annihilation) of field excitation or, say, a particle. The eigenstate of the annihilation operator has a Poissonian number distribution (as shown below). A Poisson distribution is a necessary and sufficient condition that all detections are statistically independent. Compare this to a single-particle state: once one particle is detected, there is zero probability of detecting another."


I saw that an assumption for this kind of statistic is that two simultaneous events do not occur at the same time.

I was wondering how we can use Poisson statistics if we're unsure of "the average frequency of occurrence for the time period in question", such as when photons go astray and thus remain undetected?

Simon Bridge
Homework Helper
I saw that an assumption for this kind of statistic is that two simultaneous events do not occur at the same time.
Two simultaneous events that occur at the same time would be a total of four events, which occur simultaneously ;)
You mean that events must happen one after the other?
... where did you see this?

I was wondering how we can use Poisson statistics if we're unsure of "the average frequency of occurrence for the time period in question", such as when photons go astray and thus remain undetected?
Photons that go undetected are factored in when you calibrate the detector. That's one reason for doing control runs.

The design of the experiment takes into account previous experiments done on the source.
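Part of why lost photons are easy to account for: if the source emits Poisson-distributed counts with mean λ and each photon is independently detected with probability η (the detector efficiency), the detected counts are again Poisson with mean ηλ — the so-called thinning property. A simulation sketch, with arbitrary values chosen for λ and η:

```python
import numpy as np

rng = np.random.default_rng(1)

lam, eta = 20.0, 0.6   # source mean per window, detector efficiency (arbitrary)

# Photons emitted per counting window, then each detected with probability eta.
emitted = rng.poisson(lam, size=200_000)
detected = rng.binomial(emitted, eta)

# Thinned counts are again Poisson: mean and variance both ~ eta * lam = 12.
print(detected.mean())
print(detected.var())
```

So an imperfect detector still yields Poisson statistics, just with a reduced mean — which is why a calibrated efficiency lets you infer the source rate from the detected rate.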

Two simultaneous events that occur at the same time would be a total of four events, which occur simultaneously ;)
You mean that events must happen one after the other?
... where did you see this?
I can't seem to locate the page where I saw this. It probably wasn't a credible page, since you've listed the four assumptions for me.

Photons that go undetected are factored in when you calibrate the detector. That's one reason for doing control runs.

The design of the experiment takes into account previous experiments done on the source.
I assume that any photons not detected won't make it into the final results, as including them would amount to predetermining a result without actually detecting it.

However, from the full results we get (e.g. four-fold coincidence counts rather than 3 of 4 photons detected), I gather we can use Poisson statistics to calculate a confidence interval within which the population mean may lie? Does that come with any assumptions?
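One standard way to compute such an interval: for a single observed count n, an exact ("Garwood") confidence interval for the underlying Poisson mean follows from the chi-squared distribution. A sketch using scipy (the observed count of 100 is a made-up example):

```python
from scipy.stats import chi2

def poisson_ci(n: int, conf: float = 0.95) -> tuple[float, float]:
    """Exact (Garwood) confidence interval for the Poisson mean,
    given a single observed count n."""
    alpha = 1.0 - conf
    lo = 0.0 if n == 0 else chi2.ppf(alpha / 2, 2 * n) / 2
    hi = chi2.ppf(1 - alpha / 2, 2 * (n + 1)) / 2
    return lo, hi

print(poisson_ci(100))  # roughly (81.4, 121.6)
print(poisson_ci(0))    # even zero counts give a finite upper limit
```

The main assumptions are the four Poisson conditions quoted earlier (independent, whole-number occurrences at a constant underlying rate); the interval itself then makes no large-count approximation.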

Simon Bridge
Homework Helper
I assume that any photons not detected won't make it into the final results, as including them would amount to predetermining a result without actually detecting it.
Depends what you are measuring. It is usual to correct for detector efficiency for instance, as well as for "dead time".
This is why you do control runs using a source that has well-known statistics - to find out how the detector statistics relate to the source statistics.

Depends what you are measuring. It is usual to correct for detector efficiency for instance, as well as for "dead time".
This is why you do control runs using a source that has well-known statistics - to find out how the detector statistics relate to the source statistics.
Would doing that help in calculating the systematic uncertainty of the final results?

I should have also asked: is it acceptable to publish experimental results that are at one standard deviation, for confirmation of predictions?

I'm not entirely sure how the calculation is done, except that the standard deviation is the variance of the data around the sample mean. But if you doubled the standard deviation do you risk including results not otherwise obtained in the experiment?

Simon Bridge
Homework Helper
is it acceptable to publish experimental results that are at one standard deviation, for confirmation of predictions?
Why? Have you been rejected for publication with a note about your error analysis?
What is acceptable for publication is whatever the publisher says is acceptable.

If your predicted result is within 1sd of the mean of your experimental results, then the experimental results would support the model the prediction came from - or any other model that gave similar numbers.

So that would normally be quite acceptable.
It is not normally acceptable to reject the model based on a prediction >1sd from the experimental mean.
Science focuses on rejecting models, not accepting them.

I'm not entirely sure how the calculation is done, except that the standard deviation is the variance of the data around the sample mean. But if you doubled the standard deviation do you risk including results not otherwise obtained in the experiment?
The standard deviation is the square root of the variance - it is called the "statistical error".
In Poisson statistics, for large numbers of counts, the error on the number of counts in a time interval is the square-root of the number you got. For small counts, you have to go to the equations.
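The large-count sqrt(N) rule and the small-count "equations" can be compared directly; a sketch, assuming the exact chi-squared (Garwood) construction for the 1-sigma interval:

```python
from scipy.stats import chi2

ONE_SIGMA = 0.6827  # coverage of a 1-sd interval for a Gaussian

def one_sigma_interval(n: int):
    """Return (approximate, exact) 1-sigma intervals for a Poisson mean,
    given an observed count n."""
    # Large-count rule of thumb: n +/- sqrt(n), symmetric.
    approx = (n - n**0.5, n + n**0.5)
    # Exact interval from the chi-squared relation (Garwood construction).
    alpha = 1.0 - ONE_SIGMA
    exact = (chi2.ppf(alpha / 2, 2 * n) / 2,
             chi2.ppf(1 - alpha / 2, 2 * (n + 1)) / 2)
    return approx, exact

for n in (4, 100, 10_000):
    approx, exact = one_sigma_interval(n)
    print(n, approx, exact)
# For small n the exact interval is noticeably asymmetric;
# for large n the two constructions agree closely.
```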

As for the rest: google for "hypothesis testing".

The hypothesis being tested is, loosely, that the physical model which gave rise to the prediction "works". The null-hypothesis is that it doesn't work.
Rejecting the hypothesis is the same as rejecting the model.

If the model predicts a result that turns out to be >2sd from the experimental mean, then you can reject the model with ~95% confidence... if >3sd you reject the model with ~99.7% confidence. The test says nothing about how confident you can be in the model if you don't reject it - since you don't know how many other models these results could support and, like you said, you risk erroneously accepting unrelated results as supporting the model.

Thus a single experiment cannot confirm a theory - only reject it.
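The sd-to-confidence translation used here follows from the Gaussian tail (a good approximation for large Poisson counts); a quick check of the two-sided values:

```python
from scipy.stats import norm

# Two-sided p-value for a prediction lying n standard deviations
# from the experimental mean, assuming a Gaussian sampling distribution.
p_values = {n: 2 * norm.sf(n) for n in (1, 2, 3)}

for n, p in p_values.items():
    print(f"{n} sd -> p = {p:.4f}, confidence = {1 - p:.2%}")
# 1 sd -> ~68.3% confidence, 2 sd -> ~95.4%, 3 sd -> ~99.7%
```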

Would doing that help in calculating the systematic uncertainty of the final results?
Doing what? A control run?

Control runs are essential to working out the systematic uncertainty in final results.
Always do a control run. Your measurements are invalid without one.

If you don't compare your yardstick with a known reliable yard, why should anyone believe any measurements you make with it? Especially considering the stakes if your measurements should come out different from those expected from otherwise established models?
