Counts per Frame in CCD

new6ton

Gold Member
I don't know if this is the appropriate forum for it, but I'd like to understand the concept of counts per frame in a CCD in visible or IR spectrometry. If the exposure is, say, 200 ms and there are 12 frames, what are the counts per frame? Do you multiply 200 ms by 12 frames to come up with 2400 counts per frame? But the 200 ms is the exposure. How and why is it converted into counts?


phyzguy

I don't know if this is the appropriate forum for it, but I'd like to understand the concept of counts per frame in a CCD in visible or IR spectrometry. If the exposure is, say, 200 ms and there are 12 frames, what are the counts per frame? Do you multiply 200 ms by 12 frames to come up with 2400 counts per frame? But the 200 ms is the exposure. How and why is it converted into counts?
Your question isn't clear. The number of counts isn't determined just by the exposure time, but by the measured signal. When you read out the CCD, each pixel has a signal level which is usually calibrated in electrons. Since each electron is created by an incoming photon, knowing the signal level in electrons tells you how many photons landed in each pixel. Then, based on the exposure time, you can determine the incoming flux in photons/second/pixel. Is this what you're trying to do?

new6ton

Gold Member
Your question isn't clear. The number of counts isn't determined just by the exposure time, but by the measured signal. When you read out the CCD, each pixel has a signal level which is usually calibrated in electrons. Since each electron is created by an incoming photon, knowing the signal level in electrons tells you how many photons landed in each pixel. Then, based on the exposure time, you can determine the incoming flux in photons/second/pixel. Is this what you're trying to do?
Ok. In astronomy work with a dark sky, or when detecting dark matter, the signal intensity or measured signal is what matters. Does that mean that if there are only 3 dark matter particles per minute or hour, there are only 3 counts per frame, regardless of how long the exposure is (minutes or decades)? This is just an example to emphasize the idea.

phyzguy

Ok. In astronomy work with a dark sky, or when detecting dark matter, the signal intensity or measured signal is what matters. Does that mean that if there are only 3 dark matter particles per minute or hour, there are only 3 counts per frame, regardless of how long the exposure is (minutes or decades)? This is just an example to emphasize the idea.
I'm still not understanding your question. In an astronomical image, some pixels will have a lot of light fall on them and be very bright. These will have a large signal count. Other pixels will be dark and have very few counts. The number of counts is a measure of how much light fell on the pixel. If you expose longer, you will get more counts.

new6ton

Gold Member
I'm still not understanding your question. In an astronomical image, some pixels will have a lot of light fall on them and be very bright. These will have a large signal count. Other pixels will be dark and have very few counts. The number of counts is a measure of how much light fell on the pixel. If you expose longer, you will get more counts.
Ok. No problem. I was initially confusing the exposure and the frames.

But why is it that the more frames, the better the image? The exposure is based on how long the CCD is open. But what about the number of frames it is taking? Why does it have an effect on noise and even signal?

phyzguy

Ok. No problem. I was initially confusing the exposure and the frames.

But why is it that the more frames, the better the image? The exposure is based on how long the CCD is open. But what about the number of frames it is taking? Why does it have an effect on noise and even signal?
OK, I think I can answer this one. You're taking the images through the atmosphere, and the atmosphere is turbulent, so the image dances around. If you take a long exposure, all of this turbulence creates a blurred-out image, which is an average of the "dancing-around" image. If you take many shorter exposures, each one is sharper, but each center point is in a slightly different place. So you use software to align all of these images and compensate for the "dancing-around". This is called stacking, and it creates a much sharper image.

new6ton

Gold Member
OK, I think I can answer this one. You're taking the images through the atmosphere, and the atmosphere is turbulent, so the image dances around. If you take a long exposure, all of this turbulence creates a blurred-out image, which is an average of the "dancing-around" image. If you take many shorter exposures, each one is sharper, but each center point is in a slightly different place. So you use software to align all of these images and compensate for the "dancing-around". This is called stacking, and it creates a much sharper image.
I guess stacking is also used in the spectral capture in the CCDs of spectrometers (infrared and raman)?

phyzguy

I guess stacking is also used in the spectral capture in the CCDs of spectrometers (infrared and raman)?
Not that I'm aware of. Do you have a reference?

new6ton

Gold Member
Not that I'm aware of. Do you have a reference?
It's logical. Raman spectra have very intense Rayleigh light that the system is trying to filter, so the image is noisy. It can benefit from stacking, right? Why don't you think so (perhaps it's not present in absorption spectrometers)?

Is there a principle in astronomy stacking where the magnitude of noise decreases as the square root of the number of frames?

Borek

Mentor
Is there a principle in astronomy stacking where the magnitude of noise decreases as the square root of the number of frames?
Sounds like a principle in statistics, not astronomy.

new6ton

Gold Member
The spectrometer supplier told me this:

"The noise reduces as the square root of the number of frames. Say, when measuring the water you get a noise magnitude of 1000 counts per frame; if you set 100 frames (the square root of 100 is 10), then you'll get a noise magnitude of 1000/10 = 100, i.e. just 100 counts per frame.
https://4nsi.com/blog/2015/10/21/optimizing-image-signal-to-noise-ratio-using-frame-averaging/
This is a general fact for Poisson noise https://en.wikipedia.org/wiki/Shot_noise "

He didn't describe it further. Say, do you use frame averaging in a digital camera or smartphone too? What else uses frame averaging? Is this not related to stacking images in astronomy? Please share your experiences with them.

hutchphd

Whether you increase the number of frames by 100 or the exposure time by 100, the signal to noise gets better by 10:

1. if 100 frames, the signal stays the same but the averaged noise is reduced by 10
2. if exposure ×100, then signal ×100 but noise ×10
This assumes you are not saturating any detectors.
No fundamental difference otherwise.
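The two cases above can be checked with a quick shot-noise simulation (a sketch; the mean rate of 100 counts per frame and the trial counts are made-up numbers, and only Poisson shot noise is modeled):

```python
import numpy as np

rng = np.random.default_rng(0)
rate = 100.0          # hypothetical mean signal: 100 counts per frame
trials = 10_000       # repeat the experiment many times to estimate SNR

# Single frame, for reference: SNR = 100 / sqrt(100) = 10
single = rng.poisson(rate, size=trials)
snr_single = single.mean() / single.std()

# Case 1: average 100 frames -- signal stays ~100, noise shrinks by 10
stacked = rng.poisson(rate, size=(trials, 100)).mean(axis=1)
snr_stacked = stacked.mean() / stacked.std()

# Case 2: one exposure 100x as long -- signal x100, noise only x10
long_exp = rng.poisson(rate * 100, size=trials)
snr_long = long_exp.mean() / long_exp.std()

print(snr_single, snr_stacked, snr_long)   # ~10, ~100, ~100
```

Both routes land at the same factor-of-10 improvement over a single frame, matching the "no fundamental difference" conclusion (for shot noise alone).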

sophiecentaur

Gold Member
Stacking is more use than a much longer exposure time. The explanation, as I understand it, has to do with the range of sensor values, which limits the contrast ratio for any single exposure. Say the sensor has 8-bit resolution and you stack many frames; the effective number of bits per pixel will increase due to the arithmetic. Faint objects will appear 'out of nowhere' because they (coherent signals) become significant compared with the average of random noise but, at the same time, the brightest objects are not burned out.
I am very much a beginner with this business, but I have managed to drag the Milky Way up out of a light-polluted sky by using just ten stacked exposures on my SLR. That principle works in many fields of measurement and is basically a way of reducing your system bandwidth.

hutchphd

Yes, I agree. If the long exposure requires you to change the digital increment, then the multiple exposures will provide "dither" and produce finer differentiation of the signal. This is a digitizing issue but not a fundamental "signal to noise" improvement (in my lexicon!).

And I recommend free from NIH

sophiecentaur

Gold Member
not a fundamental "signal to noise" improvement
The noise is suppressed by an averaging process. That is time-domain filtering, which is a common technique to increase the signal to noise ratio. The spectrum of the noise is changed by the non-linear process of stacking on the sequence of single-pixel values. The term "dithering" is a modern one and I'm not sure that it does anything but suppress the visibility of low-frequency noise by modulating it up to higher frequencies. But some of these processes are hard to describe in conventional analogue terms.

hutchphd

1. Suppose the pixel digitizes on a 4 bit scale 0-15. Let the "true" data value be 10.4.
2. In the absence of noise, the measurement will be 10 and repeated frames will all yield 10 for that pixel
3. If random noise of say rms=1 is present then that pixel will sometimes be 10 and sometimes be 11. The average of many frames will be close to 10.4
That is what I was talking about for "dither". It is quite distinct from the typical √n behavior associated with gaussian noise and repeat measurement. I'm uncertain as to whether this is what you were describing for dither.
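The three numbered steps above are easy to simulate (a sketch; `np.round` stands in for the ADC quantizer, and 10.4 and rms = 1 are the figures from the post):

```python
import numpy as np

rng = np.random.default_rng(1)
true_value = 10.4     # "true" signal in digitization units, on a 4-bit 0-15 scale
n_frames = 10_000

# No noise: every frame digitizes to 10, and averaging cannot recover the .4
noiseless = np.clip(np.round(np.full(n_frames, true_value)), 0, 15)
print(noiseless.mean())        # exactly 10.0

# With rms = 1 noise the frames land on 10, 11, sometimes 9 or 12,
# and the frame average converges toward the true 10.4
noisy = np.clip(np.round(true_value + rng.normal(0.0, 1.0, n_frames)), 0, 15)
print(noisy.mean())            # close to 10.4
```

The noise has to be at least on the order of one digitization step for this to work; much smaller noise leaves the quantized value stuck at 10.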

sophiecentaur

Gold Member
1. Suppose the pixel digitizes on a 4 bit scale 0-15. Let the "true" data value be 10.4.
2. In the absence of noise, the measurement will be 10 and repeated frames will all yield 10 for that pixel
3. If random noise of say rms=1 is present then that pixel will sometimes be 10 and sometimes be 11. The average of many frames will be close to 10.4
That is what I was talking about for "dither". It is quite distinct from the typical √n behavior associated with gaussian noise and repeat measurement. I'm uncertain as to whether this is what you were describing for dither.
IIRC, dither is a system for introducing low-level noise to a signal sampled with few bits, to spread the quantising noise (which is really a form of distortion) and reduce the subjective effect of the large quantising levels. There may be another application and another use of the term 'dither'.

Your example seems to be for a high signal to noise ratio already. Stacking of images will produce higher values for consistent signals (plus noise). Say you take a signal whose 'true' value is 1.1 with an rms noise of 1 and you stack 100 frames with a median rule (there are several other options in stacking programs, but I haven't tried them): the signal level (the level of the few pixels around a star's location) will be about 100, while the nearby noisy pixels will sit near the median level, which will be about 10. In broad terms, that is what I have seen work on multiple astro images. The resulting image uses more bits per sample than the original image, and it's usual to modify the gamma to bring out the fainter stars and crush the brighter ones. The resulting image gives larger dots for originally brighter stars, which has the effect of increasing the apparent contrast range.

I don't recall seeing dither as an option - but I may just have missed it.

We could be talking at cross purposes though . . . .

new6ton

Gold Member
Whether you increase the number of frames by 100 or the exposure time by 100, the signal to noise gets better by 10:

1. if 100 frames, the signal stays the same but the averaged noise is reduced by 10
2. if exposure ×100, then signal ×100 but noise ×10
This assumes you are not saturating any detectors.
No fundamental difference otherwise.
Is the above also called stacking? So can one say that frames in spectroscopy exposures are stacked too?

hutchphd

We could be talking at cross purposes though .

I may well be misusing the term although I don't believe I used it without seeing it previously in this context.
My comment shows a simple case where naturally occurring noise actually improves the result because it allows precision greater than the digitization granularity. The inherent signal to noise does not matter, only that the noise be at least as large as the digitization step. I believe we are talking about the same effect; mine was the simplest application.

hutchphd

Is the above also called stacking? So can one say that frames in spectroscopy exposures are stacked too?
By stacking I mean averaging images on a pixel-by-pixel basis. The question was the possible difference between a single 100 s exposure vs a stack of 100 exposures of 1 s. My contention is that the fundamental signal to noise expectations are the same. There are interesting details!

Tom.G

possible differences between a single 100s exposure vs a stack of 100 exposures of 1s. My contention is that fundamental signal to noise expectations are the same.
Let us try this view.

• The stars tend to be in the same place on the surface of the imager; when they are not, the stacking process tries to shift or warp the images to make them so.
• The noise is random, with some small percentage of pixels generating an extraneous signal at any given time.
• With a single long exposure, the number of pixels generating noise during the exposure interval increases with time, tending towards a uniform background or fog.
• The stacking process tends to average the value of each pixel over the set of exposures.

With the above process, since any individual noise pixel is usually off, the average noise value assigned to a given pixel is proportional to 1/√N, where N = {No. of Frames}.

The pixels containing star images, being On for most of the frames, are thus averaged to much higher values.

The above applies to noise generated in the imager (CCD) and in the analog processing string following it. It also applies to things like an airplane crossing the field of view. It does not apply to static artifacts like sky glow, city lights illuminating clouds, or a dirty lens or CCD.
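The airplane case is easiest to see with a median stack rather than a plain mean; here is a sketch for a single pixel with made-up numbers (true level 10 counts, rms = 1 noise, one saturated frame):

```python
import numpy as np

rng = np.random.default_rng(2)

# One pixel across 10 frames: true level 10 counts plus rms=1 noise,
# with an "airplane" streak saturating that pixel in frame 4
frames = 10 + rng.normal(0.0, 1.0, size=10)
frames[4] = 255.0

print(np.mean(frames))     # dragged far above 10 by the one bad frame
print(np.median(frames))   # stays near the true level of 10
```

A single long exposure has no equivalent of this per-frame rejection: the airplane is simply baked into the one image.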

Hope this helps a bit.

Cheers,
Tom

sophiecentaur

Gold Member
My comment shows a simple case where naturally occurring noise actually improves the result because it allows precision greater than digitization granularity.
I see we are talking about the same application of dither. But that is not the same application as when the noise level is several quantising levels (as in astrophotography) and the requirement of the image is not necessarily precision but actual detection of a low level feature. The stacking process requires a greater number of quantisation levels in the calculation in order to achieve the improvement. But the resulting image can be 'power stretched' to be handled by the display or for aesthetics.

Let us try this view.

• The stars tend to be in the same place on the surface of the imager; when they are not, the stacking process tries to shift or warp the images to make them so.
• The noise is random, with some small percentage of pixels generating an extraneous signal at any given time.
• With a single long exposure, the number of pixels generating noise during the exposure interval increases with time, tending towards a uniform background or fog.
• The stacking process tends to average the value of each pixel over the set of exposures.

With the above process, since any individual noise pixel is usually off, the average noise value assigned to a given pixel is proportional to 1/√N, where N = {No. of Frames}.

The pixels containing star images, being On for most of the frames, are thus averaged to much higher values.

The above applies to noise generated in the imager (CCD) and in the analog processing string following it. It also applies to things like an airplane crossing the field of view. It does not apply to static artifacts like sky glow, city lights illuminating clouds, or a dirty lens or CCD.

Hope this helps a bit.

Cheers,
Tom
I'd agree with most of that. There are two other issues. The achievable dynamic range / bit depth will be greater with stacking than with long exposures (ten times for a stack of ten images). Also, a long exposure achieves just one averaging process (rms, or whatever), whereas stacking of each pixel can use whatever algorithm best suits the noise characteristics of the electronics and other varying inputs. As you say, the spreading of the star images is also a factor (star trails and atmospheric distortions). Guiding can help with that (and registration algorithms can be applied to each 'subframe').
I have yet to understand how this applies to situations with very low level signals, down at the threshold of the sensor. (I do not have any cooling on my cheapo cameras)

Drakkith

Staff Emeritus
2018 Award
Stacking is more use than a much longer exposure time. The explanation, as I understand it, has to do with the range of sensor values, which limits the contrast ratio for any single exposure. Say the sensor has 8-bit resolution and you stack many frames; the effective number of bits per pixel will increase due to the arithmetic. Faint objects will appear 'out of nowhere' because they (coherent signals) become significant compared with the average of random noise but, at the same time, the brightest objects are not burned out.
Not only does stacking help prevent 'burnout' of parts of the image that are very bright (such as very bright stars), it also means that anything that affects the camera/telescope during the exposure process can be discarded without completely throwing out all the data. If I bump my telescope while imaging, I just throw out that single image and use the other hundred.

The term "dithering" is a modern one and I'm not sure that it does anything but to suppress the visibility of low frequency noise by modulating it up to higher frequencies.
I don't know the technical explanation for using dithering, but I've always used it to ensure that sensor artifacts aren't impressed into the final image. Any sensor will have pixels that are less responsive than others or spots where the thermal noise is much higher than average. Dithering spreads these spots out in your final image so that they don't show up. Even stacking and lots of image processing can still leave hints of these artifacts in the final image if you don't dither.

1. Suppose the pixel digitizes on a 4 bit scale 0-15. Let the "true" data value be 10.4.
2. In the absence of noise, the measurement will be 10 and repeated frames will all yield 10 for that pixel
3. If random noise of say rms=1 is present then that pixel will sometimes be 10 and sometimes be 11. The average of many frames will be close to 10.4
That is what I was talking about for "dither". It is quite distinct from the typical √n behavior associated with gaussian noise and repeat measurement. I'm uncertain as to whether this is what you were describing for dither.
If your 'true' value is 10.4, then that requires that, eventually, the detector detects more than 10 photons, since light itself is quantized. So some portion of your frames will have 11 counts instead of 10, even in our perfect scenario here, and the average of many frames would give you something approaching 10.4 counts.

In addition, if you mean that the original signal has RMS=1, then remember that noise means that the pixel is sometimes higher than average and sometimes lower than average. If you take a perfect signal of 10 and add 1 RMS of noise to it, the average is still 10. For it to be 11 you'd have to take a 1 count signal with 1 rms noise in that new signal and add it to the image. Noise is just a random variation, so adding noise doesn't add any signal.

The achievable dynamic range / bit depth will be greater with stacking than with long exposures (ten times for a stack of ten images). Also, a long exposure achieves just one averaging process (rms - or whatever ) whereas stacking of each pixel can use whatever algorithm best suits the noise characteristic of the electronics and other varying inputs. As you say, the effect of spreading of the star images is also a factor (star trails and atmospheric distortions). Guiding can help with that (and also, registration algorithms can be applied to each 'subframe'.
I have yet to understand how this applies to situations with very low level signals, down at the threshold of the sensor. (I do not have any cooling on my cheapo cameras)
From a purely statistical standpoint, stacking frames instead of taking one long exposure is actually worse for getting your SNR up and picking out those very faint details that have inherently very low SNR. This is because the sensor itself adds noise during readout. Gathering light over a single long exposure can push the signal of that faint detail up however far you want (given sensor and time limitations and such) before adding in this extra noise. So if your readout noise is 5 rms, a single exposure might bring the signal of your faint target up high enough to be clearly seen over this 5 rms noise. But stacking many shorter exposures means that your target is always buried in the noise. In other words, a single long exposure incurs the readout noise only once while the signal accumulates, whereas stacking short exposures incurs the readout noise on every frame and only beats it down by the square-root improvement law.

Imagine we have perfect, noise-free images. In our long exposure, the signal of the faint target is 1000, while in our short exposure our target's signal is 10. Now let's add our 5 rms of noise to the image, giving us 1000 with 5 rms readout noise and 10 with 5 rms readout noise. The noise is half the signal in the short exposures, but no biggie, we should be able to average out the noise, right?

To get our total signal up to the 1000 we need to match our single long exposure we need to stack 100 short exposures. Since the signal here is noise free, we just average them and we still get 10. For the readout noise, we get $σ=\sqrt{\frac{R^2}{n}} = \sqrt{\frac{5^2}{100}}=\sqrt{\frac{25}{100}}=0.5$ Our short exposures average out to give us a signal of 10 with 0.5 rms of noise.

To compare our long exposure we simply divide both the signal and the noise by 100, giving us a signal of 10 with 0.05 rms noise. In terms of SNR, our long exposure has an SNR of 200, while our stacked image has an SNR of 20. So taking one long exposure has, in this case, resulted in a factor of 10 improvement in the SNR of the target over stacking shorter images, despite the total imaging time being identical. Obviously the real benefit will be less due to other sources of noise.
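That arithmetic can be reproduced numerically (a sketch; the signal levels of 1000 and 10 and the 5 rms read noise are the figures from the post, and shot noise is ignored, as in the post):

```python
import numpy as np

rng = np.random.default_rng(3)
read_noise = 5.0
trials = 20_000

# One long exposure: signal 1000, read noise added once -> SNR ~ 1000/5 = 200
long_exp = 1000.0 + rng.normal(0.0, read_noise, size=trials)
snr_long = long_exp.mean() / long_exp.std()

# 100 short exposures of signal 10, read noise added to EVERY frame,
# then averaged: noise becomes 5/sqrt(100) = 0.5 -> SNR ~ 10/0.5 = 20
stacked = (10.0 + rng.normal(0.0, read_noise, size=(trials, 100))).mean(axis=1)
snr_stacked = stacked.mean() / stacked.std()

print(round(snr_long), round(snr_stacked))   # ~200, ~20
```

The factor-of-10 SNR gap between the two strategies comes entirely from paying the read noise 100 times instead of once.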

The reason we stack images is because of all the other benefits we gain over taking long exposures, some of which have already been discussed here.

Drakkith

Staff Emeritus
2018 Award
Is the above also called stacking? So one can say frames in spectroscopy exposures were stacked too?
For astronomy, almost certainly given that many targets are extremely dim and you're spreading this dim light out even further into a spectrum.

hutchphd

In addition, if you mean that the original signal has RMS=1, then remember that noise means that the pixel is sometimes higher than average and sometimes lower than average. If you take a perfect signal of 10 and add 1 RMS of noise to it, the average is still 10. For it to be 11 you'd have to take a 1 count signal with 1 rms noise in that new signal and add it to the image. Noise is just a random variation, so adding noise doesn't add any signal.
Perhaps you misunderstand the premise. I am not talking about photon counting but digitization. What if the "perfect" signal corresponds to 10.4 digitization units? Then the perfect noiseless camera will always report 10 for that pixel and averaging will not help. Add in some noise and the average will get closer to 10.4, as previously explained.

Photon counting is a different subject but I have also done that for fluorescence work. A cooled CCD is a remarkable tool!!
