How are counts per frame calculated in CCD for spectrometry?

  • Thread starter: new6ton
  • Tags: CCD, frame, per
SUMMARY

The discussion centers on the calculation of counts per frame in charge-coupled device (CCD) imaging for visible and infrared spectrometry. It is established that counts are determined by the measured signal level in electrons, which corresponds to the number of photons detected per pixel, rather than merely by the exposure time. The conversation also highlights the benefits of stacking multiple frames to enhance image quality by reducing noise, particularly in turbulent atmospheric conditions, and discusses the statistical principle that noise decreases as the square root of the number of frames averaged.

PREREQUISITES
  • Understanding of CCD technology and its application in spectrometry
  • Knowledge of photon detection and signal calibration in electrons
  • Familiarity with image stacking techniques and their impact on noise reduction
  • Basic principles of statistics related to noise and signal processing
NEXT STEPS
  • Research "CCD signal calibration techniques" to understand how signal levels are quantified.
  • Explore "image stacking methods in astronomy" to learn about noise reduction techniques.
  • Investigate "Poisson noise in spectrometry" to grasp the statistical principles affecting signal quality.
  • Study "frame averaging in digital imaging" to see its applications beyond astronomy.
USEFUL FOR

Researchers, astronomers, and spectroscopists interested in improving image quality and understanding the principles of signal detection and noise reduction in CCD imaging systems.

  • #31
hutchphd said:
This is not that difficult.
  1. the actual signal is 10.4
  2. with noise the signal is sometimes 11.1234567
  3. this will be reported as 11
  4. the average will be >10
QED?

What about when the signal is sometimes 8.79? Or 9.3? Or 6.5?
 
  • #32
Drakkith said:
There's nothing incorrect about it. The assumption was that we had perfect noise-free signals and we were adding in readout noise. Readout noise is not time-dependent so it isn't affected by exposure time.
OK then the example is correct but irrelevant. In the real world the noise grows with √time
 
  • #33
Drakkith said:
What about when the signal is sometimes 8.79? Or 9.3? Or 6.5?
You said the average would always be 10. It is not
 
  • #34
hutchphd said:
OK then the example is correct but irrelevant. In the real world the noise grows with √time

It is entirely relevant because it illustrates how readout noise affects the quality of an image. In general, it is better to take longer exposures than shorter ones, all else being equal. Otherwise the readout noise makes a larger contribution to the image than necessary, degrading image quality.

You are correct that the noise in the real-world signal grows as the square root of the exposure time, and the distinction between how readout noise scales (i.e. it doesn't) and how noise in the signal scales is the entire point of that section of my post!

hutchphd said:
You said the average would always be 10. It is not

Why not?
 
  • #35
Drakkith said:
In general, it is better to take longer exposures than shorter exposures all else being equal.
The operative phrase here is "all else being equal". The contrast range is always limited by the peak signal level from the sensor and the quantising step size. If we are talking astrophotography, we would (almost) never stop down our expensive telescope aperture, so when Vega burns out our image, we have exposed for too long. The best exposure time is where the brightest object in the frame is near 'FF' and that dictates the faintest nearby object that you will detect. Stacking will give you something like the equivalent of ten times the exposure time without burning out Vega.
At the same time the faint coherent objects will appear at ten times the level of a single exposure and the noise and other changing quantities will be averaged out by some algorithm.
Aeroplanes and the like can also be dealt with - sometimes very well if the algorithm is smart enough. A long exposure will just leave you with the plane's streak at around 1/10 brightness which is very visible.
Drakkith said:
Any sensor will have pixels that are less responsive than others or spots where the thermal noise is much higher than average.
Isn't that best dealt with using 'Flats' (to introduce a further nerdy idea)? And Dark frames help too, of course.
 
  • #36
sophiecentaur said:
The best exposure time is where the brightest object in the frame is near 'FF' and that dictates the faintest nearby object that you will detect. Stacking will give you something like the equivalent of ten times the exposure time without burning out Vega.

Indeed. You can stack to get whatever equivalency you want. And you can also let certain parts of your image burn out if you don't care about the details of whatever is imaging onto those pixels. So we can let Vega burn out in the core if we want and keep right on imaging. This is especially true if your CCD has anti-blooming capability.

sophiecentaur said:
Aeroplanes and the like can also be dealt with - sometimes very well if the algorithm is smart enough. A long exposure will just leave you with the plane's streak at around 1/10 brightness which is very visible.

Sigma-clip averaging is also an option. In this case the algorithm looks at those pixels, determines that they are outside a 2-3 sigma range of the average, and simply gets rid of them so they aren't averaged into the final image.

sophiecentaur said:
Isn't that best dealt with using 'Flats' (to introduce a further nerdy idea)?

Flats and darks can help, but they still don't get rid of it completely. For small deviations you're usually just fine with flats and darks, but if you have pixels which are significantly off from the norm then you'll usually do better with dithering.

Note that dithering in astrophotography might have a different meaning than in other areas. See https://www.skyandtelescope.com/astronomy-blogs/astrophotography-jerry-lodriguss/why-how-dither-astro-images/ on dithering and why it is used.
 
  • #37
Drakkith said:
Why not?
We must somehow be talking past each other on this. Let me give it one more try. Let me take n measurements:
  1. Suppose my source emits power of 10.499 units.
  2. Suppose my digitizer rounds off to whole numbers of that unit.
  3. If the RMS = 0, then the system will always report "10" and I will be in error by ~0.5 no matter how I average.
  4. If the RMS ~1 then the system will randomly report either 10 or 11 in equal number as n gets large and so I will report ~10.5 as my result.
And so the average is made better than the binning (quantisation) error because of the noise. The general case is obvious, I trust.
 
  • #38
Drakkith said:
Note that dithering in astrophotography might have a different meaning than in other areas.
In astrophotography it means actually shaking the scope or sensor (spatial domain). I have seen it as an additional option in guiding software but have ignored it so far. One of these days I may have a go.

Dithering in digital signal processing is done by adding a small noise signal to the input before quantisation (time domain).

'Divided by a common language' :smile:
 
  • #39
So dithering is "shaky" stacking...
 
  • #40
hutchphd said:
So dithering is "shaky" stacking...
But wouldn't the subs all be re-registered by the software before stacking? Nebulosity is a program that I have spent a bit of time on and the workflow seems to involve registering first and then stacking.
 
  • #41
Drakkith said:
What about when the signal is sometimes 8.79? Or 9.3? Or 6.5?
That would be a common occurrence when the seeing is not too good. The level of readout noise would not, perhaps, relate to exposure length in a simple way, but the other noise sources are 'linear with time' (?) and the benefit of exposure time would be √n based.
I have been reading all the obvious sources from Google and, speaking from experience, the noise in the image from a sensor tends to be greater than just one quantisation level. On my CMOS DSLR and my ZWO cheapy the noise can be brought up by changing the gain, and stacking will take it down below the levels of faint stars. Stacking works when a longer exposure would just burn out some stars.
 
  • #42
hutchphd said:
If the RMS ~1 then the system will randomly report either 10 or 11 in equal number as n gets large and so I will report ~10.5 as my result.

Okay, I think I get what you're saying now. I was stuck on your explanation that the noise meant that values were always added to the signal, which I still don't think is right, but it looks like even accounting for that you're still right about what the signal would be. Check the following:

If the signal has a mean value of ##\bar{x}## then an individual sample of the signal is ##\bar{x} \pm \sqrt{\bar{x}}##, where ##\bar{x}## represents the mean value of the signal and ##\sqrt{\bar{x}}## represents the standard deviation.

Given a mean signal value of 10.499, any particular sample (prior to digitization) may be larger or smaller, and we would expect the samples to follow a normal distribution where roughly two-thirds of the samples (68.3%) lie within ##\pm\sqrt{10.499}## of the mean. The other third of the time the samples should be even further away from the mean. Since this deviation is symmetrical about the mean value, we can expect to find half of the samples above the mean and half of the samples below.

With enough samples the average would approach 10.499 prior to digitization. But, nearly half the time the value is 10.5 or above and is rounded up to 11 (or greater) during digitization. Averaging would give you roughly the right answer for the signal.

This reminds me of how they found out in WW2 that their mechanical computing devices used on bombers were actually more accurate in flight than on the ground. It turned out that the vibrations were helping to keep gears from sticking and whatnot, resulting in more accuracy.

hutchphd said:
So dithering is "shaky" stacking...

That's a succinct way of putting it.
 
  • #43
sophiecentaur said:
But wouldn't the subs all be re-registered by the software before stacking? Nebulosity is a program that I have spent a bit of time on and the workflow seems to involve registering first and then stacking.

Sure. And that's why dithering is used. Dithering just ensures that each sub is in a slightly different position so that any bad pixels or spurious signals are spread over multiple pixels instead of their inaccuracy just stacking up all on the same pixel, as would happen if you had near-perfect guidance and no dithering.
 
  • #44
hutchphd said:
By stacking I mean averaging images on a pixel by pixel basis. The question was the possible differences between a single 100s exposure vs a stack of 100 exposures of 1s. My contention is that fundamental signal to noise expectations are the same. There are interesting details!

I think there is a difference between a single 100 s exposure and a stack of 100 exposures of 1 s. Here is why.

If the exposure is too fast, you don't get enough signal over the noise. So even if you take 100 frames of it, you won't get the proper signal! There must be at least a minimum exposure before the comparison is even valid.

Agreements, objections, anyone?
 
  • #45
new6ton said:
If the exposure is too fast, you don't get enough signal over the noise. So even if you take 100 frames of it, you won't get the proper signal! There must be at least a minimum exposure before the comparison is even valid.

A stack of 100 one-second exposures has the exact same amount of signal as a single 100-second exposure. The difference is that there are sources of noise that are not time dependent, and these can overwhelm a very short exposure with a very small signal. But, if the signal is very high, such as filming video outside during the daytime, you're just fine. This noise is negligible compared to the signal and the difference between the stack of short exposures and the single long exposure is minimal.
 
  • #46
I agree. To reiterate: if the noise is coherent (I often call this "error", not "noise") then multiple exposures will allow a clever scheme to subtract it off. If the noise is random you are no better off (assuming the long exposure doesn't saturate the detector somehow).

Drakkith said:
With enough samples the average would approach 10.499 prior to digitization. But, nearly half the time the value is 10.5 or above and is rounded up to 11 (or greater) during digitization. Averaging would give you roughly the right answer for the signal.
Yes! I'm a little surprised you haven't run into this before...I think it is an interesting result. Thanks for the response.
 
  • #47
hutchphd said:
Yes! I'm a little surprised you haven't run into this before...

Never had any reason to look into it before now. 😉
 
  • #48
Drakkith said:
This reminds me of how they found out in WW2 that their mechanical computing devices used on bombers were actually more accurate in flight than on the ground. It turned out that the vibrations were helping to keep gears from sticking and whatnot, resulting in more accuracy.
I must take my washing machine out in the garden and run it next to my mount at fast spin. :wink:
 
  • #49
hutchphd said:
(assuming the long exposure doesn't saturate the detector somehow).
Isn't that the point, though? When people are after impressive photos (not necessarily the pure data), they will often want to show faint features in the proximity of a bright object. Then the increased bit depth of stacked images can give that option by tinkering with the image gamma. The advice I have read is to use as high a gain (aka 'ISO') as possible, and that can bring the histogram maximum well up the range. Natch, you have to be careful about burnout on a short subframe, but you then get a factor of ten to a hundred in headroom protection.
I do take your point about coherent 'interference' vs noise and that is 'all good' because you have already got the benefit of an effectively longer exposure for random effects. Smart processing can achieve such a lot.
 
  • #50
Drakkith said:
A stack of 100 one-second exposures has the exact same amount of signal as a single 100-second exposure.
That's true but, without a significant 'light bucket' scope, you would never be using 1 s. You'd be using the sort of individual exposure length that is near clipping (assuming your tracking / guiding is up to it).
It's such a multifaceted business and I find the brain works at only ten percent when out in the cold and dark, after getting the equipment all together and aligned. Yes - I know - get an observatory - not really practical for me.
 
  • #51
Can you convince yourself that the cold is improving the dark current on the CCD? Might improve your stoicism...
 
  • #52
Drakkith said:
A stack of 100 one-second exposures has the exact same amount of signal as a single 100-second exposure. The difference is that there are sources of noise that are not time dependent, and these can overwhelm a very short exposure with a very small signal. But, if the signal is very high, such as filming video outside during the daytime, you're just fine. This noise is negligible compared to the signal and the difference between the stack of short exposures and the single long exposure is minimal.

Can you give some actual examples of applications with sources of noise that are not time-dependent?
 
  • #53
new6ton said:
Can you give some actual examples of applications with sources of noise that are not time-dependent?

I already gave one in an earlier post. The readout noise from the CCD is not time dependent.
 
  • #54
sophiecentaur said:
That's true but, without a significant 'light bucket' scope, you would never be using 1s.

Sure, but the principle is the same for any time frame. There are plenty of people who don't use autoguiding, so they have to set their exposures for perhaps 15-30 seconds at most to avoid accumulating tracking errors in each frame.
 
  • #55
Drakkith said:
I already gave one in an earlier post. The readout noise from the CCD is not time dependent.

But don't all digital cameras use CCDs? You said, "But, if the signal is very high, such as filming video outside during the daytime, you're just fine. This noise is negligible compared to the signal and the difference between the stack of short exposures and the single long exposure is minimal." Filming video uses a CCD, and it is time dependent. How do you reconcile that with the readout noise from the CCD, which is not time dependent? Filming video with a CCD incurs the readout noise too.
 
  • #56
new6ton said:
But don't all digital cameras use CCDs?
This is a good point. Many modern cameras use CMOS sensors. I was at a presentation at E2V (a leading sensor manufacturer) and was told that the future is probably CMOS, but they are fairly committed to CCD for the near future, at least. CMOS performance is improving all the time.
My Pentax k2s and my ZWO camera are both CMOS.
Drakkith said:
The readout noise from the CCD is not time dependent.
We need to re-examine that statement because I'm not sure what you mean exactly. Are you saying that the error in the value that's read out will be the same for any exposure length? If there is a source of random 'error' which occurs once per exposure, then more exposures will reduce the effect of that error. If there's merely a constant offset from pixel to pixel, then darks and flats would allow that to be eliminated. The readout noise will appear as a variation from pixel to pixel over the image, and that will be less with stacking (no?). The spectrum of the noise is relevant, I imagine, but I don't know enough about the detail to say which way it would go.
 
  • #57
Drakkith said:
There are plenty of people who don't use autoguiding, so they have to set their exposures for perhaps 15-30 seconds at most to avoid accumulating tracking errors in each frame.
Yes - that was me, until recently but I would say that the optimum method could easily be different and getting details out of the bottom few quantisation levels is a forlorn hope. (I was actually responding initially to your 1s exposure example which I would say would greatly limit the choice of objects.)
I have recently joined Havering Astronomical Society (Essex, UK), in which a group of enthusiastic AP'ers have been very active, lately. I feel put to shame by their willingness to spend so much time and money. But they are collecting an impressive set of images of Messier Objects. Here. Worth looking at, I would say - especially as nearly all of their sites are not far from Central London.
 
  • #58
new6ton said:
But don't all digital cameras use CCDs? You said, "But, if the signal is very high, such as filming video outside during the daytime, you're just fine. This noise is negligible compared to the signal and the difference between the stack of short exposures and the single long exposure is minimal." Filming video uses a CCD, and it is time dependent. How do you reconcile that with the readout noise from the CCD, which is not time dependent? Filming video with a CCD incurs the readout noise too.

All digital sensors have some readout noise as far as I know. And I don't know what you mean about the video. The readout noise is the same regardless of your exposure time. That's what I mean by it not being time dependent.

sophiecentaur said:
We need to re-examine that statement because I'm not sure what you mean exactly. Are you saying that the error in the value that's read out will be the same for any exposure length? If there is a source of random 'error' which occurs once per exposure, then more exposures will reduce the effect of that error. If there's merely a constant offset from pixel to pixel, then darks and flats would allow that to be eliminated. The readout noise will appear as a variation from pixel to pixel over the image, and that will be less with stacking (no?). The spectrum of the noise is relevant, I imagine, but I don't know enough about the detail to say which way it would go.

I mean that at readout the sensor adds a small amount of noise that is always the same (the RMS value). I don't have my reference book handy, but I think it's just Poisson noise, so stacking will reduce this noise just like it does other sources of noise.
 
  • #59
@Drakkith I get you about the noise, and the sample will be a single value +/- the electron count, according to the instantaneous (memoryless?) noise value at the sampling instant. I think it would be Poisson or even 1/f?
 
  • #60
sophiecentaur said:
The readout noise will appear as a variation from pixel to pixel over the image, and that will be less with stacking (no?)
In my understanding that is not the best way to think about it. Read noise is random for every "read" event: this is true for reading successive pixels in a frame and for multiple reads of a particular pixel in different frames. So it produces both spatial RMS and temporal RMS of the same size. The total RMS noise for N reads will grow as √N. So if N frames are stacked (and averaged):
  1. This adds to the overall temporal RMS for each pixel (relative to a single long exposure, where N = 1), as @Drakkith pointed out.
  2. This allows the spatial variability to be reduced by √N in the stack average.
The spatial variability reduction is usually worth the small price of additional read noise (particularly for images, less so for low-level quantification).
Note that the read noise is likely not centered at zero (maybe Poisson, maybe electric truncation), so an additional background subtraction of a flat-field average can be useful.
 
