How are counts per frame calculated in CCD for spectrometry?

AI Thread Summary
Counts per frame in CCD spectrometry are determined by the measured signal level of photons hitting each pixel, not solely by exposure time. Longer exposure times can lead to higher counts, but the number of frames taken also impacts image quality due to atmospheric turbulence, which can blur images; shorter exposures can be stacked to create sharper images. Stacking, which averages multiple frames, can improve signal-to-noise ratios, especially in noisy environments like Raman spectroscopy. The noise reduction follows a principle where it decreases as the square root of the number of frames, enhancing the clarity of faint signals. Overall, stacking is a valuable technique in both astronomy and spectrometry for improving image quality and measurement accuracy.
new6ton
I don't know if this is the appropriate forum for it, but I'd like to understand the concept of counts per frame for a CCD in visible or IR spectrometry. If the exposure is, say, 200 ms and there are 12 frames, what is the counts per frame? Do you multiply 200 ms by 12 frames to come up with 2400 counts per frame? But the 200 ms is the exposure time. How and why is it converted into counts?
 
new6ton said:
I don't know if this is the appropriate forum for it, but I'd like to understand the concept of counts per frame for a CCD in visible or IR spectrometry. If the exposure is, say, 200 ms and there are 12 frames, what is the counts per frame? Do you multiply 200 ms by 12 frames to come up with 2400 counts per frame? But the 200 ms is the exposure time. How and why is it converted into counts?
Your question isn't clear. The number of counts isn't determined just by the exposure time, but by the measured signal. When you read out the CCD, each pixel has a signal level which is usually calibrated in electrons. Since each electron is created by an incoming photon, knowing the signal level in electrons tells you how many photons landed in each pixel. Then, based on the exposure time, you can determine the incoming flux in photons/second/pixel. Is this what you're trying to do?
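As a rough illustrative sketch of that conversion chain (counts → electrons → photons → flux), assuming hypothetical values for the camera gain and quantum efficiency; the real numbers come from the CCD's calibration or data sheet:

```python
# Hypothetical camera parameters -- real values come from the CCD's data sheet.
GAIN_E_PER_ADU = 2.5    # electrons per raw count (ADU)
QUANTUM_EFF = 0.8       # fraction of incident photons that produce an electron
EXPOSURE_S = 0.200      # the 200 ms exposure from the original question

def photon_flux(counts_adu):
    """Convert one pixel's raw counts into an estimated photon flux."""
    electrons = counts_adu * GAIN_E_PER_ADU   # signal level in electrons
    photons = electrons / QUANTUM_EFF         # photons that landed on the pixel
    return photons / EXPOSURE_S               # photons per second per pixel

print(photon_flux(1200))  # e.g. 1200 counts -> 18750 photons/s/pixel in this toy setup
```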
 
phyzguy said:
Your question isn't clear. The number of counts isn't determined just by the exposure time, but by the measured signal. When you read out the CCD, each pixel has a signal level which is usually calibrated in electrons. Since each electron is created by an incoming photon, knowing the signal level in electrons tells you how many photons landed in each pixel. Then, based on the exposure time, you can determine the incoming flux in photons/second/pixel. Is this what you're trying to do?

OK. In astronomy work with a dark sky, or when detecting dark matter, the signal intensity or measured signal is what matters. Does that mean that if there are only 3 dark matter particles per minute or hour, there are only 3 counts per frame, regardless of how long the exposure is (minutes or decades)? This is just an example to emphasize the idea.
 
new6ton said:
OK. In astronomy work with a dark sky, or when detecting dark matter, the signal intensity or measured signal is what matters. Does that mean that if there are only 3 dark matter particles per minute or hour, there are only 3 counts per frame, regardless of how long the exposure is (minutes or decades)? This is just an example to emphasize the idea.
I'm still not understanding your question. In an astronomical image, some pixels will have a lot of light fall on them and be very bright. These will have a large signal count. Other pixels will be dark and have very few counts. The number of counts is a measure of how much light fell on the pixel. If you expose longer, you will get more counts.
 
phyzguy said:
I'm still not understanding your question. In an astronomical image, some pixels will have a lot of light fall on them and be very bright. These will have a large signal count. Other pixels will be dark and have very few counts. The number of counts is a measure of how much light fell on the pixel. If you expose longer, you will get more counts.

OK, no problem. I was initially confusing the exposure and the frames.

But why is it that the more frames, the better the image? The exposure is how long the CCD is open, but what about the number of frames it takes? Why does that have an effect on the noise and even on the signal?
 
new6ton said:
OK, no problem. I was initially confusing the exposure and the frames.

But why is it that the more frames, the better the image? The exposure is how long the CCD is open, but what about the number of frames it takes? Why does that have an effect on the noise and even on the signal?
OK, I think I can answer this one. You're taking the images through the atmosphere, and the atmosphere is turbulent, so the image dances around. If you take a long exposure, all of this turbulence creates a blurred-out image, which is an average of the "dancing-around" image. If you take many shorter exposures, each one is sharper, but each center point is in a slightly different place. So you use software to align all of these images and compensate for the "dancing-around". This is called stacking, and it creates a much sharper image.
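As a toy sketch of the align-and-stack idea: real stacking software estimates the shifts from the star positions and also handles sub-pixel offsets, rotation, and outlier rejection; here the integer pixel shifts are simply assumed to be known.

```python
import numpy as np

def stack_frames(frames, shifts):
    """Average short exposures after undoing each frame's (dy, dx) drift.

    frames : list of equally sized 2-D numpy arrays (the short exposures)
    shifts : list of integer (dy, dx) offsets measured for each frame
    """
    aligned = [np.roll(frame, (-dy, -dx), axis=(0, 1))
               for frame, (dy, dx) in zip(frames, shifts)]
    return np.mean(aligned, axis=0)  # pixel-by-pixel average of the aligned frames
```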
 
phyzguy said:
OK, I think I can answer this one. You're taking the images through the atmosphere, and the atmosphere is turbulent, so the image dances around. If you take a long exposure, all of this turbulence creates a blurred-out image, which is an average of the "dancing-around" image. If you take many shorter exposures, each one is sharper, but each center point is in a slightly different place. So you use software to align all of these images and compensate for the "dancing-around". This is called stacking, and it creates a much sharper image.

I guess stacking is also used in the spectral capture in the CCDs of spectrometers (infrared and Raman)?
 
new6ton said:
I guess stacking is also used in the spectral capture in the CCDs of spectrometers (infrared and Raman)?
Not that I'm aware of. Do you have a reference?
 
phyzguy said:
Not that I'm aware of. Do you have a reference?

It's just logic. Raman spectra contain very intense Rayleigh light that the system is trying to filter out, so the image is noisy. It could benefit from stacking, right? Why don't you think so (perhaps the problem isn't present in absorption spectrometers)?

Is there a principle in astronomy stacking where the magnitude of the noise decreases as the square root of the number of frames?
 
  • #10
new6ton said:
Is there a principle in astronomy stacking where the magnitude of the noise decreases as the square root of the number of frames?

Sounds like a principle in statistics, not astronomy.
 
  • #11
The spectrometer supplier told me this:

"The noise reduces as a square root of number of frames. Say, when measuring the water you get the magnitude of noise 1000 count per frame, and if you set 100 frames (square root of 100 is equal to 10) than you'll get the magnitude of noise equal 1000/10 = 100, i.e. just 100 counts per frame.
https://4nsi.com/blog/2015/10/21/optimizing-image-signal-to-noise-ratio-using-frame-averaging/
This is a general fact for Posson noise https://en.wikipedia.org/wiki/Shot_noise "

He didn't describe it further. Say. Do you use frame averaging in digital camera or smartphone too? What else use frame averaging? Is this not related to stacking images in astronomy? Please share your experiences about them.
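The supplier's 1000/√100 = 100 figure can be checked with a few lines of simulation. This sketch simply models the per-frame noise as zero-mean Gaussian with 1000 counts rms and averages 100 frames; the "clean" signal level of 5000 counts is an arbitrary assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames = 100
noise_rms = 1000.0     # per-frame noise quoted by the supplier
true_signal = 5000.0   # arbitrary 'clean' level, in counts

# Repeat the 100-frame average many times to measure the residual noise.
trials = true_signal + rng.normal(0.0, noise_rms, size=(10_000, n_frames))
averages = trials.mean(axis=1)

print("noise per frame     :", noise_rms)                      # 1000
print("predicted after avg :", noise_rms / np.sqrt(n_frames))  # 100
print("measured after avg  :", averages.std())                 # close to 100
```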
 
  • #12
Whether you increase the number of frames by 100 or the exposure time by 100, the signal to noise gets better by 10:

  1. with 100 averaged frames, the signal stays the same but the averaged noise is reduced by 10
  2. with 100× the exposure, the signal is ×100 but the noise only ×10
This assumes you are not saturating any detectors.
No fundamental difference otherwise.
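A quick Poisson sketch of those two cases (the photon rate is an arbitrary assumption): averaging 100 short frames and exposing 100 times longer end up with the same signal-to-noise ratio, which is the "no fundamental difference" point above. Read noise, which comes up later in the thread, is ignored here.

```python
import numpy as np

rng = np.random.default_rng(1)
rate = 50        # mean photons per pixel in one short frame (arbitrary)
n = 100          # number of frames, or the exposure multiplier
pixels = 100_000

short_frames = rng.poisson(rate, size=(n, pixels))   # 100 short exposures
long_frame = rng.poisson(rate * n, size=pixels)      # one exposure, 100x longer

stack = short_frames.mean(axis=0)                    # average the stack

print("SNR of averaged stack:", stack.mean() / stack.std())            # ~70.7
print("SNR of long exposure :", long_frame.mean() / long_frame.std())  # ~70.7
```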
 
  • #13
Stacking is more useful than a much longer exposure time. The explanation, as I understand it, has to do with the range of sensor values, which limits the contrast ratio for any single exposure. Say the sensor has 8-bit resolution and you stack many frames; the effective number of bits per pixel will increase due to the arithmetic. Faint objects will appear 'out of nowhere' because they (coherent signals) become significant compared with the average of the random noise but, at the same time, the brightest objects are not burned out.
I am very much a beginner with this business, but I have managed to drag the Milky Way up out of a light-polluted sky by using just ten stacked exposures on my SLR. That principle works in many fields of measurement and is basically a way of reducing your system bandwidth.
 
  • #14
Yes, I agree... if the long exposure requires you to change the digital increment, then multiple exposures will provide "dither" and produce finer differentiation of the signal. This is a digitizing issue, but not a fundamental "signal to noise" improvement (in my lexicon!).

And I recommend ImageJ, free from NIH:

https://imagej.nih.gov/ij/download.html
 
  • #15
hutchphd said:
not a fundamental "signal to noise" improvement
The noise is suppressed by an averaging process. That is time-domain filtering, which is a common technique to increase the signal to noise ratio. The spectrum of the noise is changed by the non-linear process of stacking on the sequence of single pixel values. The term "dithering" is a modern one and I'm not sure that it does anything but to suppress the visibility of low frequency noise by modulating it up to higher frequencies. But some of these processes are hard to describe in conventional analogue terms.
 
  • #16
  1. Suppose the pixel digitizes on a 4-bit scale, 0-15. Let the "true" data value be 10.4.
  2. In the absence of noise, the measurement will be 10 and repeated frames will all yield 10 for that pixel.
  3. If random noise of, say, rms = 1 is present, then that pixel will sometimes be 10 and sometimes be 11. The average of many frames will be close to 10.4.
That is what I was talking about for "dither". It is quite distinct from the typical √n behavior associated with Gaussian noise and repeated measurement. I'm uncertain as to whether this is what you were describing for dither.
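A short simulation of that numbered example, assuming Gaussian noise and a digitizer that rounds to the nearest whole count and clips to the 4-bit range:

```python
import numpy as np

rng = np.random.default_rng(2)
true_value = 10.4
n_frames = 10_000

def digitize(samples):
    """Round to whole counts and clip to the 4-bit range 0-15."""
    return np.clip(np.round(samples), 0, 15)

no_noise = digitize(np.full(n_frames, true_value))                  # every frame reads 10
with_noise = digitize(true_value + rng.normal(0.0, 1.0, n_frames))  # mix of 10s, 11s, and others

print("average, no noise :", no_noise.mean())    # exactly 10.0
print("average, rms = 1  :", with_noise.mean())  # close to 10.4
```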
 
  • #17
hutchphd said:
  1. Suppose the pixel digitizes on a 4-bit scale, 0-15. Let the "true" data value be 10.4.
  2. In the absence of noise, the measurement will be 10 and repeated frames will all yield 10 for that pixel.
  3. If random noise of, say, rms = 1 is present, then that pixel will sometimes be 10 and sometimes be 11. The average of many frames will be close to 10.4.
That is what I was talking about for "dither". It is quite distinct from the typical √n behavior associated with Gaussian noise and repeated measurement. I'm uncertain as to whether this is what you were describing for dither.
IIRC, dither is a system for introducing low-level noise to a low-bit sampled signal to spread the quantising noise (which is really a form of distortion) and reduce the subjective effect of the large quantising levels. There may be another application and another use of the term 'dither'.

Your example seems to be for a high signal to noise ratio already. Stacking of images will produce higher values for consistent values (plus noise). Say you take a signal whose 'true' value is 1.1 with an rms noise of 1 and you stack 100 frames with a median rule (there are several other options in stacking programs, but I haven't tried them): the signal level (the level of the few pixels around a star's location) will be about 100, and the nearby noisy pixels will end up at about 10. In broad terms, that is what I have seen work on multiple astro images. The resulting image uses more bits per sample than the original image, and it's usual to modify the gamma to bring out the fainter stars and crush the brighter ones. The resulting image gives larger dots for the originally brighter stars, which has the effect of increasing the apparent contrast range.

I don't recall seeing dither as an option - but I may just have missed it.

We could be talking at cross purposes though . . . .
 
  • #18
hutchphd said:
Whether you increase the number of frames by 100 or the exposure time by 100, the signal to noise gets better by 10:

  1. with 100 averaged frames, the signal stays the same but the averaged noise is reduced by 10
  2. with 100× the exposure, the signal is ×100 but the noise only ×10
This assumes you are not saturating any detectors.
No fundamental difference otherwise.

Is the above also called stacking? So one could say that frames in spectroscopy exposures are stacked too?
 
  • #19
sophiecentaur said:
We could be talking at cross purposes though .
I may well be misusing the term although I don't believe I used it without seeing it previously in this context.
My comment shows a simple case where naturally occurring noise actually improves the result because it allows precision greater than digitization granularity. The inherent signal to noise does not matter...only that the noise be "at least as large" as the digitization step. I believe we are talking about the same effect...mine was the simplest application.
 
  • #20
new6ton said:
Is the above also called stacking? So one could say that frames in spectroscopy exposures are stacked too?
By stacking I mean averaging images on a pixel-by-pixel basis. The question was the possible differences between a single 100s exposure vs a stack of 100 exposures of 1s. My contention is that fundamental signal to noise expectations are the same. There are interesting details!
 
  • #21
hutchphd said:
possible differences between a single 100s exposure vs a stack of 100 exposures of 1s. My contention is that fundamental signal to noise expectations are the same.
Let us try this view.

  • The stars tend to be in the same place on the surface of the imager; when they are not, the stacking process tries to shift or warp the image to make them so.
  • The noise is random, with some small percentage of pixels generating an extraneous signal at any given time.
  • With a single long exposure, the number of pixels generating noise during the exposure interval increases with time, tending towards a uniform background or fog.
  • The stacking process tends to average the value of each pixel over the set of exposures.

With the above process, since any individual noise pixel is usually off, the average noise value assigned to a given pixel is proportional to 1/√N, where N = {No. of Frames}.

The pixels containing star images, being On for most of the frames, are thus averaged to much higher values.

The above applies to noise generated in the imager (CCD) and in the analog processing string following it. It also applies to things like an airplane crossing the field of view. It does not apply to static artifacts like sky glow, city lights illuminating clouds, or a dirty lens or CCD.

Hope this helps a bit.

Cheers,
Tom
 
  • #22
hutchphd said:
My comment shows a simple case where naturally occurring noise actually improves the result because it allows precision greater than digitization granularity.
I see we are talking about the same application of dither. But that is not the same application as when the noise level is several quantising levels (as in astrophotography) and the requirement of the image is not necessarily precision but actual detection of a low level feature. The stacking process requires a greater number of quantisation levels in the calculation in order to achieve the improvement. But the resulting image can be 'power stretched' to be handled by the display or for aesthetics.
Tom.G said:
Let us try this view.

  • The stars tend to be in the same place on the surface of the imager; when they are not, the stacking process tries to shift or warp the image to make them so.
  • The noise is random, with some small percentage of pixels generating an extraneous signal at any given time.
  • With a single long exposure, the number of pixels generating noise during the exposure interval increases with time, tending towards a uniform background or fog.
  • The stacking process tends to average the value of each pixel over the set of exposures.

With the above process, since any individual noise pixel is usually off, the average noise value assigned to a given pixel is proportional to 1/√N, where N = {No. of Frames}.

The pixels containing star images, being On for most of the frames, are thus averaged to much higher values.

The above applies to noise generated in the imager (CCD) and in the analog processing string following it. It also applies to things like an airplane crossing the field of view. It does not apply to static artifacts like sky glow, city lights illuminating clouds, or a dirty lens or CCD.

Hope this helps a bit.

Cheers,
Tom
I'd agree with most of that. There are two other issues. The achievable dynamic range / bit depth will be greater with stacking than with long exposures (ten times for a stack of ten images). Also, a long exposure achieves just one averaging process (rms or whatever), whereas stacking of each pixel can use whatever algorithm best suits the noise characteristic of the electronics and other varying inputs. As you say, the effect of spreading of the star images is also a factor (star trails and atmospheric distortions). Guiding can help with that (and also, registration algorithms can be applied to each 'subframe').
I have yet to understand how this applies to situations with very low level signals, down at the threshold of the sensor. (I do not have any cooling on my cheapo cameras)
 
  • #23
sophiecentaur said:
Stacking is more useful than a much longer exposure time. The explanation, as I understand it, has to do with the range of sensor values, which limits the contrast ratio for any single exposure. Say the sensor has 8-bit resolution and you stack many frames; the effective number of bits per pixel will increase due to the arithmetic. Faint objects will appear 'out of nowhere' because they (coherent signals) become significant compared with the average of the random noise but, at the same time, the brightest objects are not burned out.

Not only does stacking help prevent 'burnout' of parts of the image that are very bright (such as very bright stars), it also means that anything that affects the camera/telescope during the exposure process can be discarded without completely throwing out all the data. If I bump my telescope while imaging, I just throw out that single image and use the other hundred.

sophiecentaur said:
The term "dithering" is a modern one and I'm not sure that it does anything but to suppress the visibility of low frequency noise by modulating it up to higher frequencies.

I don't know the technical explanation for using dithering, but I've always used it to ensure that sensor artifacts aren't impressed into the final image. Any sensor will have pixels that are less responsive than others or spots where the thermal noise is much higher than average. Dithering spreads these spots out in your final image so that they don't show up. Even stacking and lots of image processing can still leave hints of these artifacts in the final image if you don't dither.

hutchphd said:
  1. Suppose the pixel digitizes on a 4-bit scale, 0-15. Let the "true" data value be 10.4.
  2. In the absence of noise, the measurement will be 10 and repeated frames will all yield 10 for that pixel.
  3. If random noise of, say, rms = 1 is present, then that pixel will sometimes be 10 and sometimes be 11. The average of many frames will be close to 10.4.
That is what I was talking about for "dither". It is quite distinct from the typical √n behavior associated with Gaussian noise and repeated measurement. I'm uncertain as to whether this is what you were describing for dither.

If your 'true' value is 10.4, then that requires that, eventually, the detector detects more than 10 photons, since light itself is quantized. So some portion of your frames will have 11 counts instead of 10, even in our perfect scenario here, and the average of many frames would give you something approaching 10.4 counts.

In addition, if you mean that the original signal has RMS = 1, then remember that noise means that the pixel is sometimes higher than average and sometimes lower than average. If you take a perfect signal of 10 and add 1 RMS of noise to it, the average is still 10. For it to be 11 you'd have to take a 1-count signal, with 1 rms of noise in that new signal, and add it to the image. Noise is just a random variation, so adding noise doesn't add any signal.

sophiecentaur said:
The achievable dynamic range / bit depth will be greater with stacking than with long exposures (ten times for a stack of ten images). Also, a long exposure achieves just one averaging process (rms or whatever), whereas stacking of each pixel can use whatever algorithm best suits the noise characteristic of the electronics and other varying inputs. As you say, the effect of spreading of the star images is also a factor (star trails and atmospheric distortions). Guiding can help with that (and also, registration algorithms can be applied to each 'subframe').
I have yet to understand how this applies to situations with very low level signals, down at the threshold of the sensor. (I do not have any cooling on my cheapo cameras)

From a purely statistical standpoint, stacking frames instead of taking one long exposure is actually worse for getting your SNR up and picking out those very faint details that have inherently very low SNR. This is because the sensor itself adds noise during readout. Gathering light over a single long exposure can push the signal of that faint detail up however far you want (given sensor and time limitations and such) before adding in this extra noise. So if your readout noise is 5 rms, a single exposure might bring the signal of your faint target up high enough to be clearly seen over this 5 rms of noise. But stacking many shorter exposures means that your target is always buried in the noise. In other words, a single long exposure incurs the readout noise only once while the signal keeps building, whereas stacking short exposures adds readout noise with every frame and only beats it down by the square root of the number of frames.

Imagine we have perfect, noise-free images. In our long exposure, the signal of the faint target is 1000, while in our short exposure our target's signal is 10. Now let's add our 5 rms of noise to the image, giving us 1000 with 5 rms readout noise and 10 with 5 rms readout noise. The noise is half the signal in the short exposures, but no biggie, we should be able to average out the noise, right?

To get our total signal up to the 1000 needed to match our single long exposure, we need to stack 100 short exposures. Since the signal here is noise free, we just average them and we still get 10. For the readout noise, we get ##\sigma=\sqrt{\frac{R^2}{n}} = \sqrt{\frac{5^2}{100}}=\sqrt{\frac{25}{100}}=0.5##, where R is the per-readout noise and n is the number of frames stacked. Our short exposures average out to give us a signal of 10 with 0.5 rms of noise.

To compare with our long exposure, we simply divide both its signal and its noise by 100, giving us a signal of 10 with 0.05 rms of noise. In terms of SNR, our long exposure has an SNR of 200, while our stacked image has an SNR of 20. So taking one long exposure has, in this case, resulted in a factor-of-10 improvement in the SNR of the target over stacking shorter images, despite the total imaging time being identical. Obviously the real benefit will be less due to other sources of noise.
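The arithmetic above can be written out directly; this sketch just reproduces the numbers quoted in the post, deliberately ignoring shot noise and other noise sources as the post does.

```python
import numpy as np

read_noise = 5.0       # rms readout noise added once per readout
signal_long = 1000.0   # faint target collected in one long exposure
signal_short = 10.0    # the same target in each of the short exposures
n = 100                # number of short exposures in the stack

# One long exposure: the read noise is incurred only once.
snr_long = signal_long / read_noise                 # 1000 / 5 = 200

# Stack of n short exposures: averaging beats the read noise down by sqrt(n).
stacked_read_noise = np.sqrt(read_noise**2 / n)     # 0.5
snr_stack = signal_short / stacked_read_noise       # 10 / 0.5 = 20

print(snr_long, snr_stack)   # 200.0 vs 20.0 -- the factor of 10 quoted above
```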

The reason we stack images is because of all the other benefits we gain over taking long exposures, some of which have already been discussed here.
 
  • #24
new6ton said:
Is the above also called stacking? So one could say that frames in spectroscopy exposures are stacked too?

For astronomy, almost certainly given that many targets are extremely dim and you're spreading this dim light out even further into a spectrum.
 
  • #25
Drakkith said:
In addition, if you mean that the original signal has RMS=1, then remember that noise means that the pixel is sometimes higher than average and sometimes lower than average. If you take a perfect signal of 10 and add 1 RMS of noise to it, the average is still 10. For it to be 11 you'd have to take a 1 count signal with 1 rms noise in that new signal and add it to the image. Noise is just a random variation, so adding noise doesn't add any signal.

Perhaps you misunderstand the premise. I am not talking about photon counting but about digitization. What if the "perfect" signal corresponds to 10.4 digitization units? Then the perfect noiseless camera will always report 10 for that pixel and averaging will not help. Add in some noise and the average will get closer to 10.4, as previously explained.

Photon counting is a different subject but I have also done that for fluorescence work. A cooled CCD is a remarkable tool!
 
  • #26
Drakkith said:
Imagine we have perfect, noise-free images. In our long exposure, the signal of the faint target is 1000, while in our short exposure our target's signal is 10. Now let's add our 5 rms of noise to the image, giving us 1000 with 5 rms readout noise and 10 with 5 rms readout noise. The noise is half the signal in the short exposures, but no biggie, we should be able to average out the noise, right?
This is an incorrect analysis. In the real world the noise signal for a long exposure will grow as √(exposure time) and so the result will be the same.

There is no free lunch either way
 
  • #27
sophiecentaur said:
The stacking process requires a greater number of quantisation levels in the calculation in order to achieve the improvement. But the resulting image can be 'power stretched' to be handled by the display or for aesthetics.
Yes. The eyeball is decidedly nonlinear and the stacking gives you much more flexibility. I think we agree completely.
 
  • #28
hutchphd said:
Perhaps you misunderstand the premise. I am not talking about photon counting but about digitization. What if the "perfect" signal corresponds to 10.4 digitization units? Then the perfect noiseless camera will always report 10 for that pixel and averaging will not help. Add in some noise and the average will get closer to 10.4, as previously explained.

Even assuming a continuous (non-quantized) signal that is always clipped to 10, you still won't get 10.4 with noise. If your signal is 10, and you add any amount of noise, then the average signal over many frames is still 10. As I said above, this is because noise is a random variation that takes the signal both above and below the average value. This is why averaging frames even works. Over many frames, the high values and the low values cancel each other out and the average approaches the 'true' value.

The only exception is if your noise is large enough that the spread of values gets clipped when trying to go under zero. Then the average would be larger, because the low values that would cancel out the high values have been clipped (and I'm not even sure this is a realistic scenario for real equipment).
 
  • #29
hutchphd said:
This is an incorrect analysis. In the real world the noise signal for a long exposure will grow as √(exposure time) and so the result will be the same.

There's nothing incorrect about it. The assumption was that we had perfect noise-free signals and we were adding in readout noise. Readout noise is not time-dependent so it isn't affected by exposure time.
 
  • #30
This is not that difficult.
  1. the actual signal is 10.4
  2. with noise the signal is sometimes 11.1234567
  3. this will be reported as 11
  4. the average will be >10
QED?
 
  • #31
hutchphd said:
This is not that difficult.
  1. the actual signal is 10.4
  2. with noise the signal is sometimes 11.1234567
  3. this will be reported as 11
  4. the average will be >10
QED?

What about when the signal is sometimes 8.79? Or 9.3? Or 6.5?
 
  • #32
Drakkith said:
There's nothing incorrect about it. The assumption was that we had perfect noise-free signals and we were adding in readout noise. Readout noise is not time-dependent so it isn't affected by exposure time.
OK then the example is correct but irrelevant. In the real world the noise grows with √time
 
  • #33
Drakkith said:
What about when the signal is sometimes 8.79? Or 9.3? Or 6.5?
You said the average would always be 10. It is not
 
  • #34
hutchphd said:
OK then the example is correct but irrelevant. In the real world the noise grows with √time

It is entirely relevant because it illustrates how readout noise affects the quality of an image. In general, it is better to take longer exposures than shorter exposures all else being equal. Otherwise the readout noise makes a larger contribution to the image than necessary, degrading image quality.

You are correct that the noise in the real-world signal grows as the square root of the exposure time, and the distinction between how readout noise scales (i.e. it doesn't) and how noise in the signal scales is the entire point of that section of my post!

hutchphd said:
You said the average would always be 10. It is not

Why not?
 
  • #35
Drakkith said:
In general, it is better to take longer exposures than shorter exposures all else being equal.
The operative thing here is "all else being equal". The contrast range is always limited by the peak signal level from the sensor and the quantising step size. If we are talking astrophotography, we would (almost) never stop down our expensive telescope aperture, so when Vega burns out our image, we have exposed for too long. The best exposure time is where the brightest object in the frame is near 'FF' and that dictates the faintest nearby object that you will detect. Stacking will give you something like the equivalent of ten times the exposure time without burning out Vega.
At the same time the faint coherent objects will appear at ten times the level of a single exposure and the noise and other changing quantities will be averaged out by some algorithm.
Aeroplanes and the like can also be dealt with - sometimes very well if the algorithm is smart enough. A long exposure will just leave you with the plane's streak at around 1/10 brightness which is very visible.
Drakkith said:
Any sensor will have pixels that are less responsive than others or spots where the thermal noise is much higher than average.
Isn't that best dealt with using 'Flats' (to introduce a further nerdy idea)? And Dark frames help too, of course.
 
  • #36
sophiecentaur said:
The best exposure time is where the brightest object in the frame is near 'FF' and that dictates the faintest nearby object that you will detect. Stacking will give you something like the equivalent of ten times the exposure time without burning out Vega.

Indeed. You can stack to get whatever equivalency you want. And you can also let certain parts of your image burn out if you don't care about the details of whatever is imaging onto those pixels. So we can let Vega burn out in the core if we want and keep right on imaging. This is especially true if your CCD has anti-blooming capability.

sophiecentaur said:
Aeroplanes and the like can also be dealt with - sometimes very well if the algorithm is smart enough. A long exposure will just leave you with the plane's streak at around 1/10 brightness which is very visible.

Sigma-clip averaging is also an option. In this case the algorithm looks at those pixels, determines that they are outside a 2-3 sigma range of the average, and simply gets rid of them so they aren't averaged into the final image.
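A bare-bones version of that idea, for illustration only (real stacking programs are considerably more careful, e.g. they iterate and handle registration first): for each pixel, drop the samples that sit more than a chosen number of standard deviations from that pixel's mean across the stack, then average what is left.

```python
import numpy as np

def sigma_clip_stack(frames, n_sigma=2.5):
    """Average a stack pixel-by-pixel, ignoring outliers such as plane trails.

    frames : numpy array of shape (n_frames, height, width)
    """
    mean = frames.mean(axis=0)
    std = frames.std(axis=0)
    keep = np.abs(frames - mean) <= n_sigma * std        # per-sample outlier mask
    kept_sum = np.where(keep, frames, 0.0).sum(axis=0)
    kept_count = keep.sum(axis=0)
    # Average the kept samples; fall back to the plain mean if everything was clipped.
    return np.where(kept_count > 0, kept_sum / np.maximum(kept_count, 1), mean)
```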

sophiecentaur said:
Isn't that best dealt with using 'Flats' (to introduce a further nerdy idea)?

Flats and darks can help, but they still don't get rid of it completely. For small deviations you're usually just fine with flats and darks, but if you have pixels which are significantly off from the norm then you'll usually do well from dithering.

Note that dithering in astrophotography might have a different meaning than in other areas. See https://www.skyandtelescope.com/astronomy-blogs/astrophotography-jerry-lodriguss/why-how-dither-astro-images/ on dithering and why it is used.
 
  • #37
Drakkith said:
Why not?
We must somehow be talking past each other on this. Let me give it one more try. Let me take n measurements
  1. Suppose my source emits power of 10.499 units
  2. Suppose my digitizer rounds off to whole numbers of that unit
  3. If the RMS=0, then the system will always report "10" and I will be in error by ~ 0.5 no matter how I average
  4. If the RMS ~1 then the system will randomly report either 10 or 11 in equal number as n gets large and so I will report ~10.5 as my result.
And so the average is made better than the binning error because of the noise. The general case is obvious I trust.
 
  • #38
Drakkith said:
Note that dithering in astrophotography might have a different meaning than in other areas.
That kind of dithering means actually shaking the scope or sensor (spatial domain). I have seen it as an additional option in guiding software but have ignored it so far. One of these days I may have a go.

Dithering in digital processing is done by adding low-level noise to the signal in the time domain.

'Divided by a common language' :smile:
 
  • #39
So dithering is "shaky" stacking...
 
  • #40
hutchphd said:
So dithering is "shaky" stacking...
But wouldn't the subs all be re-registered by the software before stacking? Nebulosity is a program that I have spent a bit of time on and the workflow seems to involve registering first and then stacking.
 
  • #41
Drakkith said:
What about when the signal is sometimes 8.79? Or 9.3? Or 6.5?
That would be a common occurrence when the seeing is not too good. The level of readout noise would not, perhaps, relate to exposure length in a simple way but the other noise sources are 'linear with time'(?) and the benefit of exposure time would be √n based.
I have been reading all the obvious sources from Google and also, speaking from experience, the noise in the image from a sensor tends to be greater than just one quantisation level. On my CMOS DSLR and my ZWO cheapy it can be brought up by changing the gain, and stacking will take it down below the levels of faint stars. Stacking works when a longer exposure would just burn out some stars.
 
  • #42
hutchphd said:
If the RMS ~1 then the system will randomly report either 10 or 11 in equal number as n gets large and so I will report ~10.5 as my result.

Okay, I think I get what you're saying now. I was stuck on your explanation that the noise meant that values were always added to the signal, which I still don't think is right, but it looks like even accounting for that you're still right about what the signal would be. Check the following:

If the signal has a value of ##\bar{x}##, then an individual sample of the signal is ##\bar{x} \pm \sqrt{\bar{x}}##, where ##\bar{x}## represents the mean value of the signal and ##\sqrt{\bar{x}}## represents the standard deviation.

Given a mean signal value of 10.499, any particular sample (prior to digitization) may be larger or smaller, and we would expect the samples to follow a normal distribution where roughly two-thirds of the samples (68.3%) lie within ##\pm\sqrt{10.499}## of the mean. The other third of the time the samples should be even further away from the mean. Since this deviation is symmetrical about the mean value, we can expect to find half of the samples above the mean and half of the samples below.

With enough samples the average would approach 10.499 prior to digitization. But, nearly half the time the value is 10.5 or above and is rounded up to 11 (or greater) during digitization. Averaging would give you roughly the right answer for the signal.

This reminds me of how they found out in WW2 that their mechanical computing devices used on bombers were actually more accurate in flight than on the ground. It turned out that the vibrations were helping to keep gears from sticking and whatnot, resulting in more accuracy.

hutchphd said:
So dithering is "shaky" stacking...

That's a succinct way of putting it.
 
  • #43
sophiecentaur said:
But wouldn't the subs all be re-registered by the software before stacking? Nebulosity is a program that I have spent a bit of time on and the workflow seems to involve registering first and then stacking.

Sure. And that's why dithering is used. Dithering just ensures that each sub is in a slightly different position so that any bad pixels or spurious signals are spread over multiple pixels instead of their inaccuracy just stacking up all on the same pixel, as would happen if you had near-perfect guidance and no dithering.
 
  • #44
hutchphd said:
By stacking I mean averaging images on a pixel-by-pixel basis. The question was the possible differences between a single 100s exposure vs a stack of 100 exposures of 1s. My contention is that fundamental signal to noise expectations are the same. There are interesting details!

I think there is a difference between a single 100s exposure and a stack of 100 exposures of 1s. Here is why.

If the exposure is too short, you don't get enough signal over the noise. So even if you take 100 frames of it, you won't get the proper signal! There must be at least a minimum exposure before the comparison is even valid.

Agreements, objections, anyone?
 
  • #45
new6ton said:
If the exposure is too short, you don't get enough signal over the noise. So even if you take 100 frames of it, you won't get the proper signal! There must be at least a minimum exposure before the comparison is even valid.

A stack of 100 one-second exposures has the exact same amount of signal as a single 100-second exposure. The difference is that there are sources of noise that are not time dependent, and these can overwhelm a very short exposure with a very small signal. But, if the signal is very high, such as filming video outside during the daytime, you're just fine. This noise is negligible compared to the signal and the difference between the stack of short exposures and the single long exposure is minimal.
 
  • #46
I agree. To reiterate: if the noise is coherent (I often call this "error", not "noise"), then multiple exposures will allow a clever scheme to subtract it off. If the noise is random, you are no better off (assuming the long exposure doesn't saturate the detector somehow).

Drakkith said:
With enough samples the average would approach 10.499 prior to digitization. But, nearly half the time the value is 10.5 or above and is rounded up to 11 (or greater) during digitization. Averaging would give you roughly the right answer for the signal.
Yes! I'm a little surprised you haven't run into this before...I think it is an interesting result. Thanks for the response.
 
  • #47
hutchphd said:
Yes! I'm a little surprised you haven't run into this before...

Never had any reason to look into it before now. 😉
 
  • #48
Drakkith said:
This reminds me of how they found out in WW2 that their mechanical computing devices used on bombers were actually more accurate in flight than on the ground. It turned out that the vibrations were helping to keep gears from sticking and whatnot, resulting in more accuracy.
I must take my washing machine out in the garden and run it next to my mount at fast spin. :wink:
 
  • #49
hutchphd said:
(assuming the long exposure doesn't saturate the detector somehow).
Isn't that the point, though? When people are after impressive photos (not necessarily the pure data), they will often want to show faint features in the proximity of a bright object. Then the resulting increased bit depth of stacked images can give that option by tinkering with the image gamma. The advice I have read is to use as high a gain (aka 'ISO') as possible, and that can bring the histogram maximum well up the range. Natch, you have to be careful about burnout on a short subframe, but you then get a factor of ten to a hundred in headroom protection.
I do take your point about coherent 'interference' vs noise and that is 'all good' because you have already got the benefit of an effectively longer exposure for random effects. Smart processing can achieve such a lot.
 
  • #50
Drakkith said:
A stack of 100 one-second exposures has the exact same amount of signal as a single 100-second exposure.
That's true but, without a significant 'light bucket' scope, you would never be using 1s. You'd be using the sort of individual exposure length that is near clipping (assuming your tracking / guiding is up to it).
It's such a multifaceted business and I find the brain works at only ten percent when out in the cold and dark, after getting the equipment all together and aligned. Yes - I know - get an observatory - not really practical for me.
 