How are counts per frame calculated in CCD for spectrometry?

In summary: The more frames you take, the more data points you have to work with, which can help reduce noise in the final image. Additionally, taking multiple frames and stacking them can help reduce the effects of atmospheric turbulence, resulting in a sharper image. This is achieved by aligning and combining the frames to create a single, higher quality image. Therefore, the more frames you have, the better the image quality can potentially be.
  • #36
sophiecentaur said:
The best exposure time is where the brightest object in the frame is near 'FF' and that dictates the faintest nearby object that you will detect. Stacking will give you something like the equivalent of ten times the exposure time without burning out Vega.

Indeed. You can stack to get whatever equivalency you want. And you can also let certain parts of your image burn out if you don't care about the details of whatever is imaging onto those pixels. So we can let Vega burn out in the core if we want and keep right on imaging. This is especially true if your CCD has anti-blooming capability.

sophiecentaur said:
Aeroplanes and the like can also be dealt with - sometimes very well if the algorithm is smart enough. A long exposure will just leave you with the plane's streak at around 1/10 brightness which is very visible.

Sigma-clip averaging is also an option. In this case the algorithm looks at those pixels, determines that they are outside a 2-3 sigma range of the average, and simply gets rid of them so they aren't averaged into the final image.
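As a concrete sketch (in Python with NumPy; the 2.5-sigma threshold, the frame shapes, and the streak example are illustrative, not from any particular stacking package):

```python
import numpy as np

def sigma_clip_average(frames, sigma=2.5):
    """Average a stack of frames, rejecting per-pixel outliers.

    frames: array of shape (n_frames, height, width).
    Samples more than `sigma` standard deviations from the
    per-pixel mean (e.g. a plane streak crossing one frame)
    are masked out before the final average.
    """
    stack = np.asarray(frames, dtype=float)
    mean = stack.mean(axis=0)
    std = stack.std(axis=0)
    outliers = np.abs(stack - mean) > sigma * std
    return np.ma.masked_array(stack, mask=outliers).mean(axis=0)

# Example: 10 frames of flat sky, one crossed by a bright streak.
rng = np.random.default_rng(0)
frames = rng.normal(100, 5, size=(10, 4, 4))
frames[3, 1, :] += 500                    # simulated aeroplane streak
print(sigma_clip_average(frames)[1, :])   # streak rejected, values near 100
```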

sophiecentaur said:
Isn't that best dealt with using 'Flats' (to introduce a further nerdy idea)?

Flats and darks can help, but they still don't get rid of it completely. For small deviations you're usually just fine with flats and darks, but if you have pixels that are significantly off from the norm, then you'll usually do well with dithering.

Note that dithering in astrophotography might have a different meaning than in other areas. See https://www.skyandtelescope.com/astronomy-blogs/astrophotography-jerry-lodriguss/why-how-dither-astro-images/ on dithering and why it is used.
 
  • #37
Drakkith said:
Why not?
We must somehow be talking past each other on this. Let me give it one more try. Let me take n measurements:
  1. Suppose my source emits power of 10.499 units
  2. Suppose my digitizer rounds off to whole numbers of that unit
  3. If the RMS=0, then the system will always report "10" and I will be in error by ~ 0.5 no matter how I average
  4. If the RMS ~1 then the system will randomly report either 10 or 11 in equal number as n gets large and so I will report ~10.5 as my result.
And so, because of the noise, the average becomes more accurate than the quantization (binning) error would otherwise allow. The general case is obvious, I trust.
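For anyone who wants to check this numerically, here is a minimal sketch in Python (the 10.499 source value and whole-number rounding follow the numbered example above; the unit-RMS Gaussian noise is my stand-in for "RMS ~ 1"):

```python
import numpy as np

rng = np.random.default_rng(42)
true_signal = 10.499
n = 100_000

# Case 3: RMS = 0 -- every sample rounds to 10; averaging never helps.
noiseless = np.round(np.full(n, true_signal))
print(noiseless.mean())   # 10.0, stuck ~0.5 away from the truth

# Case 4: RMS ~ 1 noise added before digitization -- rounding now lands
# on 10 or 11 (and neighbours) in proportions that preserve the mean.
noisy = np.round(true_signal + rng.normal(0.0, 1.0, n))
print(noisy.mean())       # ~10.5: the quantization error averages out
```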
 
  • #38
Drakkith said:
Note that dithering in astrophotography might have a different meaning than in other areas.
In astrophotography it means actually shaking the scope or sensor (spatial domain). I have seen it as an additional option in guiding software but have ignored it so far. One of these days I may have a go.

Dithering in digital processing is done by adding noise to the signal in the time domain.

'Divided by a common language' :smile:
 
  • #39
So dithering is "shaky" stacking...
 
  • #40
hutchphd said:
So dithering is "shaky" stacking...
But wouldn't the subs all be re-registered by the software before stacking? Nebulosity is a program that I have spent a bit of time on and the workflow seems to involve registering first and then stacking.
 
  • #41
Drakkith said:
What about when the signal is sometimes 8.79? Or 9.3? Or 6.5?
That would be a common occurrence when the seeing is not too good. The level of readout noise would not, perhaps, relate to exposure length in a simple way, but the other noise sources are 'linear with time'(?) and the benefit of exposure time would be √n based.
I have been reading all the obvious sources from Google and, speaking from experience, the noise in the image from a sensor tends to be greater than just one quantisation level. On my CMOS DSLR and my ZWO cheapy the noise can be brought up by changing the gain, and stacking will take it down below the levels of faint stars. Stacking works where a longer exposure would just burn out some stars.
 
  • #42
hutchphd said:
If the RMS ~1 then the system will randomly report either 10 or 11 in equal number as n gets large and so I will report ~10.5 as my result.

Okay, I think I get what you're saying now. I was stuck on your explanation that the noise meant that values were always added to the signal, which I still don't think is right, but it looks like even accounting for that you're still right about what the signal would be. Check the following:

If the signal has a mean value of ##\bar{x}##, then an individual sample of the signal is ##\bar{x} \pm \sqrt{\bar{x}}##, where ##\bar{x}## represents the mean value of the signal and ##\sqrt{\bar{x}}## represents the standard deviation.

Given a mean signal value of 10.499, any particular sample (prior to digitization) may be larger or smaller, and we would expect the samples to follow a normal distribution where roughly two-thirds of the samples (68.3%) lie within ##\pm\sqrt{10.499}## of the mean. The remaining third of the samples lie even further from the mean. Since this deviation is symmetrical about the mean value, we can expect to find half of the samples above the mean and half below.

With enough samples the average would approach 10.499 prior to digitization. But, nearly half the time the value is 10.5 or above and is rounded up to 11 (or greater) during digitization. Averaging would give you roughly the right answer for the signal.
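A quick numerical sketch of the above (this assumes a normal approximation to the shot noise, with the mean of 10.499 and ##\sigma = \sqrt{10.499}## taken from the paragraphs above):

```python
import numpy as np

rng = np.random.default_rng(1)
mean_signal = 10.499
sigma = np.sqrt(mean_signal)      # shot-noise standard deviation

# Samples of the analog signal, prior to digitization.
samples = rng.normal(mean_signal, sigma, size=100_000)
digitized = np.round(samples)     # the ADC rounds to whole numbers

print(np.mean(np.abs(samples - mean_signal) < sigma))  # ~0.683 within one sigma
print(samples.mean())             # ~10.499 before digitization
print(digitized.mean())           # still ~10.5 after rounding
```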

This reminds me of how they found out in WW2 that their mechanical computing devices used on bombers were actually more accurate in flight than on the ground. It turned out that the vibrations were helping to keep gears from sticking and whatnot, resulting in more accuracy.

hutchphd said:
So dithering is "shaky" stacking...

That's a succinct way of putting it.
 
  • #43
sophiecentaur said:
But wouldn't the subs all be re-registered by the software before stacking? Nebulosity is a program that I have spent a bit of time on and the workflow seems to involve registering first and then stacking.

Sure. And that's why dithering is used. Dithering just ensures that each sub is in a slightly different position so that any bad pixels or spurious signals are spread over multiple pixels instead of their inaccuracy just stacking up all on the same pixel, as would happen if you had near-perfect guidance and no dithering.
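A toy illustration of that (a sketch: the whole-pixel dither offsets, the single hot pixel, and np.roll standing in for registration are all invented for the demo):

```python
import numpy as np

rng = np.random.default_rng(7)
n_frames, size, sky = 20, 32, 100.0
hot_y, hot_x = 10, 10                       # hot pixel fixed on the SENSOR

# Random pointing offsets (the dither), whole pixels for simplicity.
offsets = rng.integers(-3, 4, size=(n_frames, 2))

registered = []
for dy, dx in offsets:
    frame = rng.normal(sky, 3.0, size=(size, size))
    frame[hot_y, hot_x] = 10_000.0          # stuck/hot pixel
    # Registration shifts the frame back so the stars line up, which
    # moves the hot pixel to a different sky position in each sub.
    registered.append(np.roll(frame, (-dy, -dx), axis=(0, 1)))

stacked = np.array(registered).mean(axis=0)
print(stacked.max())   # hot pixel spread over many sky positions and heavily
                       # diluted (without dithering it would survive at 10000)
```

Combined with the sigma-clip averaging sketched earlier in the thread, those single-frame outliers would be rejected outright rather than merely diluted.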
 
  • #44
hutchphd said:
By stacking I mean averaging images on a pixel-by-pixel basis. The question was the possible differences between a single 100 s exposure vs a stack of 100 exposures of 1 s. My contention is that the fundamental signal-to-noise expectations are the same. There are interesting details!

I think there is a difference between a single 100 s exposure and a stack of 100 exposures of 1 s. Here is why.

If the exposure is too fast, you don't get enough signal over the noise. So even if you take 100 frames of it, you won't get a proper signal! There must be at least a minimum exposure before the comparison is even valid.

Agreements, objections anyone?
 
  • #45
new6ton said:
If the exposure is too fast, you don't get enough signal over the noise. So even if you take 100 frames of it, you won't get a proper signal! There must be at least a minimum exposure before the comparison is even valid.

A stack of 100 one-second exposures has the exact same amount of signal as a single 100-second exposure. The difference is that there are sources of noise that are not time dependent, and these can overwhelm a very short exposure with a very small signal. But, if the signal is very high, such as filming video outside during the daytime, you're just fine. This noise is negligible compared to the signal and the difference between the stack of short exposures and the single long exposure is minimal.
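To put rough numbers on that (a sketch with invented rates; the SNR expression is the usual shot-noise-plus-read-noise form, ##\mathrm{SNR} = S/\sqrt{S + B + nR^2}## for total signal S, total sky background B, n frames, and per-frame read noise R):

```python
import numpy as np

def snr(rate, sky_rate, read_noise, exp_time, n_frames):
    """SNR of n_frames stacked exposures of exp_time seconds each.

    Signal and sky grow with total integration time; read noise is
    paid once per frame, so it enters n_frames times.
    """
    signal = rate * exp_time * n_frames
    sky = sky_rate * exp_time * n_frames
    return signal / np.sqrt(signal + sky + n_frames * read_noise**2)

# Faint source: 5 e-/s against 10 e-/s sky, 10 e- RMS read noise.
print(snr(5, 10, 10, 100, 1))    # single 100 s frame: ~12.5
print(snr(5, 10, 10, 1, 100))    # 100 x 1 s frames: ~4.7, read noise dominates
# Bright source (daylight-video levels): read noise is negligible.
print(snr(5e4, 10, 10, 100, 1))  # ~2236
print(snr(5e4, 10, 10, 1, 100))  # ~2234, essentially identical
```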
 
  • #46
I agree. To reiterate; if the noise is coherent (I often call this "error" not "noise") then multiple exposures will allow a clever scheme to subtract it off. If the noise is random you are no better off (assuming the long exposure doesn't saturate the detector somehow).

Drakkith said:
With enough samples the average would approach 10.499 prior to digitization. But, nearly half the time the value is 10.5 or above and is rounded up to 11 (or greater) during digitization. Averaging would give you roughly the right answer for the signal.
Yes! I'm a little surprised you haven't run into this before...I think it is an interesting result. Thanks for the response.
 
  • #47
hutchphd said:
Yes! I'm a little surprised you haven't run into this before...

Never had any reason to look into it before now. 😉
 
  • #48
Drakkith said:
This reminds me of how they found out in WW2 that their mechanical computing devices used on bombers were actually more accurate in flight than on the ground. It turned out that the vibrations were helping to keep gears from sticking and whatnot, resulting in more accuracy.
I must take my washing machine out in the garden and run it next to my mount at fast spin. :wink:
 
  • #49
hutchphd said:
(assuming the long exposure doesn't saturate the detector somehow).
Isn't that the point, though? When people are after impressive photos (not necessarily the pure data), they will often want to show faint features in the proximity of a bright object. Then the increased bit depth of stacked images can give that option by tinkering with the image gamma. The advice I have read is to use as high a gain (aka 'ISO') as possible, and that can bring the histogram maximum well up the range. Natch, you have to be careful about burnout on a short subframe, but you then get a factor of ten to a hundred of headroom protection.
I do take your point about coherent 'interference' vs noise and that is 'all good' because you have already got the benefit of an effectively longer exposure for random effects. Smart processing can achieve such a lot.
 
  • #50
Drakkith said:
A stack of 100 one-second exposures has the exact same amount of signal as a single 100-second exposure.
That's true but, without a significant 'light bucket' scope, you would never be using 1 s. You'd be using the sort of individual exposure length that is near clipping (assuming your tracking/guiding is up to it).
It's such a multifaceted business and I find the brain works at only ten percent when out in the cold and dark, after getting the equipment all together and aligned. Yes - I know - get an observatory - not really practical for me.
 
  • #51
Can you convince yourself that the cold is improving the dark current on the CCD? Might improve your stoicism...
 
  • #52
Drakkith said:
A stack of 100 one-second exposures has the exact same amount of signal as a single 100-second exposure. The difference is that there are sources of noise that are not time dependent, and these can overwhelm a very short exposure with a very small signal. But, if the signal is very high, such as filming video outside during the daytime, you're just fine. This noise is negligible compared to the signal and the difference between the stack of short exposures and the single long exposure is minimal.

Can you give some actual examples of applications with sources of noise that are not time-dependent?
 
  • #53
new6ton said:
Can you give some actual examples of applictions with sources of noise that are not time-dependent?

I already gave one in an earlier post. The readout noise from the CCD is not time dependent.
 
  • #54
sophiecentaur said:
That's true but, without a significant 'light bucket' scope, you would never be using 1s.

Sure, but the principle is the same for any time frame. There are plenty of people who don't use autoguiding, so they have to set their exposures for perhaps 15-30 seconds at most to avoid accumulating tracking errors in each frame.
 
  • #55
Drakkith said:
I already gave one in an earlier post. The readout noise from the CCD is not time dependent.

But do not all digital cameras use CCDs? You said, "But, if the signal is very high, such as filming video outside during the daytime, you're just fine. This noise is negligible compared to the signal and the difference between the stack of short exposures and the single long exposure is minimal." Filming video uses a CCD, and filming is time dependent. How do you reconcile that with the readout noise from the CCD not being time dependent? Filming video with a CCD involves the readout noise too.
 
  • #56
new6ton said:
But do not all digital cameras use CCDs?
This is a good point. Many modern cameras use CMOS sensors. I was at a presentation at E2V (a leading sensor manufacturer) and was told that the future is probably CMOS, but they are fairly committed to CCD for the near future, at least. CMOS performance is improving all the time.
My Pentax k2s and my ZWO camera are both CMOS.
Drakkith said:
The readout noise from the CCD is not time dependent.
We need to re-examine that statement because I'm not sure what you mean exactly. Are you saying that the error in the value that's read out will be the same for any exposure length? If there is a source of random 'error' which occurs once per exposure, then more exposures will reduce the effect of that error. If there's merely a constant offset from pixel to pixel, then darks and flats would allow that to be eliminated. The readout noise will appear as a variation from pixel to pixel over the image, and that will be less with stacking (no?). The spectrum of the noise is relevant, I imagine, but I don't know enough about the detail to say which way it would go.
 
  • #57
Drakkith said:
There are plenty of people who don't use autoguiding, so they have to set their exposures for perhaps 15-30 seconds at most to avoid accumulating tracking errors in each frame.
Yes - that was me until recently, but I would say that the optimum method could easily be different, and getting details out of the bottom few quantisation levels is a forlorn hope. (I was actually responding initially to your 1 s exposure example, which I would say would greatly limit the choice of objects.)
I have recently joined Havering Astronomical Society (Essex, UK), in which a group of enthusiastic AP'ers has been very active lately. I feel put to shame by their willingness to spend so much time and money. But they are collecting an impressive set of images of Messier Objects. Here. Worth looking at, I would say - especially as nearly all of their sites are not far from Central London.
 
  • #58
new6ton said:
But do not all digital cameras use CCDs? You said, "But, if the signal is very high, such as filming video outside during the daytime, you're just fine. This noise is negligible compared to the signal and the difference between the stack of short exposures and the single long exposure is minimal." Filming video uses a CCD, and filming is time dependent. How do you reconcile that with the readout noise from the CCD not being time dependent? Filming video with a CCD involves the readout noise too.

All digital sensors have some readout noise as far as I know. And I don't know what you mean about the video. The readout noise is the same regardless of your exposure time. That's what I mean by it not being time dependent.

sophiecentaur said:
We need to re-examine that statement because I'm not sure what you mean exactly. Are you saying that the error in the value that's read out will be the same for any exposure length? If there is a source of random 'error' which occurs once per exposure, then more exposures will reduce the effect of that error. If there's merely a constant offset from pixel to pixel, then darks and flats would allow that to be eliminated. The readout noise will appear as a variation from pixel to pixel over the image, and that will be less with stacking (no?). The spectrum of the noise is relevant, I imagine, but I don't know enough about the detail to say which way it would go.

I mean that at readout the sensor adds a small amount of noise that is always the same (the RMS value). I don't have my reference book handy, but I think it's just Poisson noise, so stacking will reduce this noise just like it does other sources of noise.
 
  • #59
@Drakkith I get you about the noise: the sample will be a single value, plus or minus, added to the electron count, according to the instantaneous (memoryless?) noise value at the sampling instant. I think it would be Poisson, or even 1/f?
 
  • #60
sophiecentaur said:
The readout noise will appear as a variation from pixel to pixel, over the image and that will be less with stacking (no?)
In my understanding that is not the best way to think about it. Read noise is random for every "read" event: this is true for reading successive pixels in a frame or for multiple reads of a particular pixel in different frames. So it produces both spatial RMS and temporal RMS of the same size. The total RMS noise for N reads will grow as √N. So if N frames are stacked (and averaged):
  1. This then adds additionally to the overall temporal RMS (relative to a single long exposure where N=1) as @Drakkith pointed out for each pixel
  2. This allows the spatial variability to be reduced by √N in the stack average
The spatial-variability reduction is usually worth the small price of additional read noise (particularly for images; less so for low-level quantification).
Note that the read noise is likely not centered at zero (maybe Poisson, maybe electronic truncation), so an additional background subtraction of a flat-field average can be useful.
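A numerical sanity check of those two points (a sketch; the 5 e- RMS Gaussian read noise and the 64×64 sensor are invented, and it is taken as zero-mean for simplicity, despite the note above about centering):

```python
import numpy as np

rng = np.random.default_rng(3)
read_rms, n_frames, shape = 5.0, 100, (64, 64)

# N frames of pure read noise (signal omitted to isolate the effect).
frames = rng.normal(0.0, read_rms, size=(n_frames, *shape))

print(frames[0].std())           # ~5:   spatial RMS of a single read
print(frames.sum(axis=0).std())  # ~50:  total noise of N reads grows as sqrt(N)
print(frames.mean(axis=0).std()) # ~0.5: spatial RMS of the average falls as sqrt(N)
```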
 
  • #61
Hmmm. I thought readout noise was centered at zero, but I could be mistaken.
 
  • #62
I think it depends upon the CCD but I have never worried about it much. Certainly the background can be subtracted off as required.
 
  • #63
When the signal is comparable or close to the readout noise, is that when more exposure can spell a difference? Remember, I wrote this thread about counts per frame in CCDs in visible as well as IR spectroscopy, so please share your experience in IR spectroscopy now. With these features soon available in ordinary smartphones, we could scan for, say, date-rape drugs in drinks, or check how sweet an apple at the grocery is, with our smartphones. It would be handy knowledge.
 
  • #64
new6ton said:
When the signal is comparable or close to the readout noise, is that when more exposure can spell a difference?

Stacking more exposures always improves the SNR. It's just that when your signal is very strong the SNR is already very, very high and you just don't need to improve it.

new6ton said:
Remember, I wrote this thread about counts per frame in CCDs in visible as well as IR spectroscopy, so please share your experience in IR spectroscopy now.

There's no difference between the two in terms of noise and how sensors work. Everything that's been discussed here applies to both IR and visible.
 
  • #65
[Attached image: ccd sensitivity.gif — CCD spectral sensitivity curve]

Is readout noise dependent on wavelength?

Anyway, I just have one more question and don't want to post a new thread, to save pages.

Can you give an example of an actual photo where the sensitivity response of the CCD is flat (uniform from red to violet) instead of a curve? And how would an image look if your eyes had a flat response too, able to see visible light with equal sensitivity?
 
  • #66
Human sensitivity at red and blue wavelengths is lower than at green. So a flat response would boost reds and blues, giving a strong magenta tint to all images.
But that's an unrealistic answer to an unrealistic question, I think.
 
  • #67
new6ton said:
Is readout noise dependent on wavelength?

No, because readout noise has nothing to do with wavelengths. It has to do with the sensor's ability to accurately count electrons shuffled in from each photosite (pixel).

new6ton said:
Can you give an example of an actual photo where the sensitivity response of the CCD is flat (uniform from red to violet) instead of a curve?

No such CCDs exist. All have some sort of curve in their response sensitivity.
 
  • #68
new6ton said:
And how would an image look if your eyes had a flat response too, able to see visible light with equal sensitivity?

Well, we don't, so we can't say what it would look like if we did because our brains would also interpret this differently. To throw out a random guess, I'd say that everything would look very much the same as it does now.
 
  • #69
The quantum efficiency of both CCD and CMOS detectors peaks slightly above 900 nm. Any other wavelength will be intrinsically noisier. Whether that noise is significant depends upon a host of factors, including details of fabrication, signal strength, etc.
 
