Uncertainties in Poisson processes

AI Thread Summary
The discussion revolves around characterizing optical detectors, specifically focusing on the uncertainty in photon count measurements from a single large pixel detector. The user is uncertain about how to calculate the uncertainty associated with repeated measurements of photon counts, considering two methods: standard deviation and the Poisson process approach. Responses indicate that for large counts, using the square root of the average count is appropriate, while smaller counts may require alternative statistical measures due to skewness. Additionally, the conversation highlights the importance of considering various noise sources, such as dark current and amplifier noise, in the measurement process. The user seeks clarification on the validity of these methods in the context of a shot noise-limited detector like a Single Photon Avalanche Diode.
marco1235
Good morning PF,

I'm feeling a bit doubtful about this issue. I'm working with optical detectors and I have to characterize them in terms of quantum efficiency and other similar things. Now suppose my detector is, ideally, a single large pixel, which I illuminate for a specific time. Then I store the recorded Nphotons and repeat the procedure 10k times! At each iteration, due to the randomness of the process, I can get 100 counts in the first step, 102 in the second, then 95, 87, 101, 106, ... and so on.
I want to take the average of these 10k values, and that's fine. But what about the uncertainty associated with this repeated measurement? I see two ways:
1) computing the standard deviation of the 10k values (e.g. with Matlab's std function)
2) taking the square root of Navg, as for a Poisson process
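For what it's worth, the two options can be compared directly on simulated data. Here is a minimal stdlib-only Python sketch (the same comparison works with Matlab's std); the mean of 100 photons and the 10 000 repetitions are just the numbers from the example above, and `poisson_sample` is a helper written for this illustration:

```python
import math
import random
import statistics

def poisson_sample(lam: float, rng: random.Random) -> int:
    """Draw one Poisson(lam) variate via Knuth's product-of-uniforms method."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while p > limit:
        p *= rng.random()
        k += 1
    return k - 1

rng = random.Random(0)

# Simulate 10k exposures of an ideal pixel with a true mean of 100 photons
counts = [poisson_sample(100, rng) for _ in range(10_000)]

n_avg = statistics.fmean(counts)
sigma_std = statistics.stdev(counts)   # option 1: sample standard deviation
sigma_poisson = math.sqrt(n_avg)       # option 2: square root of the mean count

print(n_avg, sigma_std, sigma_poisson)
```

For a shot-noise-limited detector both numbers come out close to sqrt(100) = 10, which is the point of the discussion below: at large counts the two options agree.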

I'm really stuck on this.
Hope someone can help me!
Have a nice day
 
A Poisson distribution with a large mean count N is approximately normal, with standard deviation ##\sqrt{N}##.
If you have small counts, the distribution is strongly skewed and you'll need the median and quartiles, or some other way to account for the skewness.
So - if you have sufficiently large counts, you want option 2... though either should work.
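As a quick numerical check of the skewness point, here is a stdlib-only Python sketch (the mean values 2 and 100 are arbitrary illustrations, and both helpers are written for this example). A Poisson distribution has skewness ##1/\sqrt{\lambda}##, so small counts are visibly skewed while large counts are nearly symmetric:

```python
import math
import random

def poisson_sample(lam, rng):
    """Knuth's product-of-uniforms Poisson sampler."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while p > limit:
        p *= rng.random()
        k += 1
    return k - 1

def sample_skewness(xs):
    """Third standardized moment of a sample."""
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / n
    return sum((x - m) ** 3 for x in xs) / (n * s2 ** 1.5)

rng = random.Random(1)
small = [poisson_sample(2, rng) for _ in range(20_000)]    # skewness ~ 1/sqrt(2) ~ 0.71
large = [poisson_sample(100, rng) for _ in range(20_000)]  # skewness ~ 1/sqrt(100) = 0.10
print(sample_skewness(small), sample_skewness(large))
```

This is why the mean-plus-standard-deviation summary only becomes safe once the counts are large.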
 
Thank you so much! It helped a lot :)
 
marco1235 said:
But how about the uncertainty associated with this repeated measure?

It's not clear what you are really asking or trying to characterize. Simon Bridge's response refers only to the noise associated with incoherent photons (thermal light, or 'shot noise'), but you have other noise sources: dark current, amplifier noise, etc. Your measurement contains all of these noise sources, which are hopefully independent of each other. Hamamatsu has some very readable references on this issue:

http://www.hamamatsu.com/jp/en/community/optical_sensors/all_sensors/guide_to_detector_selection/index.html
http://www.hamamatsu.com/jp/en/community/optical_sensors/sipm/measuring_mppc/index.html
http://www.hamamatsu.com/jp/en/community/optical_sensors/all_sensors/index.html
http://www.hamamatsu.com/resources/pdf/ssd/e05_handbook_image_sensors.pdf
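Since the noise sources are (hopefully) independent, their rms contributions add in quadrature. A tiny sketch with hypothetical numbers (not taken from any specific sensor or from the Hamamatsu references):

```python
import math

# Hypothetical per-exposure noise contributions, in electrons (rms)
shot_noise = math.sqrt(100)   # Poisson shot noise for ~100 detected photons
dark_noise = math.sqrt(10)    # dark-current shot noise, ~10 dark electrons
read_noise = 3.0              # amplifier/readout noise

# Independent noise sources combine in quadrature
total_noise = math.sqrt(shot_noise**2 + dark_noise**2 + read_noise**2)
print(total_noise)  # sqrt(100 + 10 + 9), a bit under 11 electrons rms
```

The simple sqrt(Navg) rule only describes the first term; whether it is a good estimate of the total depends on how small the other contributions are.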
 
Dear Andy,

thank you for your answer. You're right, I probably went a bit too fast. In the case of an ideal detector (only shot noise-limited) and photons coming from a fluorescent specimen, is Simon's reply still valid? In real life my detector is a camera based on Single Photon Avalanche Diodes, and the designers told me that the sensor is only shot noise-limited, since SPADs can produce mA-range currents upon photo-detection, so there's no need for gain stages as in other optical sensors.
 