sophiecentaur said:
Stacking is more use than a much longer exposure time. The explanation, as I understand it, is to do with the range of sensor values - which limits the contrast ratio for any single exposure. Say the sensor has 8-bit resolution and you stack many frames: the effective number of bits per pixel will increase due to the arithmetic. Faint objects will appear 'out of nowhere' because they (coherent signals) become significant compared with the average of random noise but, at the same time, the brightest objects are not burned out.
Not only does stacking help prevent 'burnout' of very bright parts of the image (such as bright stars), it also means that anything that affects the camera/telescope during the exposure process can be discarded without throwing out all the data. If I bump my telescope while imaging, I just throw out that single frame and use the other hundred.
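Here's a minimal numpy sketch of that workflow (all the numbers -- frame counts, signal levels, the index of the "bumped" frame -- are made up for illustration, not from any real stacking software): simulate a stack of subframes, corrupt one, drop it, and average the rest.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: 100 short subframes of the same 4x4 patch of sky,
# each equal to the true scene plus zero-mean read noise.
true_scene = np.full((4, 4), 50.0)
frames = [true_scene + rng.normal(0, 5, size=(4, 4)) for _ in range(100)]

# Simulate a bumped telescope: one frame is smeared garbage.
frames[42] = rng.uniform(0, 255, size=(4, 4))

# Because each frame is stored separately, we can simply discard the bad
# one and average the rest -- the other 99 frames lose nothing.
good = [f for i, f in enumerate(frames) if i != 42]
stacked = np.mean(good, axis=0)
```

With a single long exposure, that one bump would have ruined the entire dataset; here it costs only 1% of the integration time.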
sophiecentaur said:
The term "dithering" is a modern one and I'm not sure that it does anything but to suppress the visibility of low frequency noise by modulating it up to higher frequencies.
I don't know the technical explanation for using dithering, but I've always used it to ensure that sensor artifacts aren't imprinted on the final image. Any sensor will have pixels that are less responsive than others, or spots where the thermal noise is much higher than average. Dithering spreads these spots out across your final image so that they don't show up. Even stacking and lots of image processing can still leave hints of these artifacts in the final image if you don't dither.
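A toy simulation of why that works (the image size, star position, offsets, and hot-pixel value are all invented for illustration): a defective "hot" pixel is fixed in sensor coordinates, but dithering moves the sky around on the sensor, so after the frames are registered back onto the sky the hot pixel lands somewhere different in every frame, and a median stack rejects it.

```python
import numpy as np

rng = np.random.default_rng(1)
size = 32
sky = np.zeros((size, size))
sky[16, 16] = 100.0              # a lone "star" on the sky

HOT = (5, 5)                     # hot pixel, fixed in *sensor* coordinates
frames, offsets = [], []
for _ in range(20):
    dy, dx = rng.integers(-3, 4, size=2)          # dither: random pointing offset
    frame = np.roll(sky, (dy, dx), axis=(0, 1))   # the sky shifts on the sensor
    frame[HOT] = 1000.0                           # the hot pixel does not move
    frames.append(frame)
    offsets.append((dy, dx))

# Registration: shift each frame back so the star lines up, then median-combine.
# The star stacks coherently; the hot pixel is scattered and voted out.
registered = [np.roll(f, (-dy, -dx), axis=(0, 1))
              for f, (dy, dx) in zip(frames, offsets)]
stacked = np.median(registered, axis=0)
```

Without the dither (all offsets zero), the hot pixel would sit at the same registered position in every frame and survive the median at full strength.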
hutchphd said:
- Suppose the pixel digitizes on a 4 bit scale 0-15. Let the "true" data value be 10.4.
- In the absence of noise, the measurement will be 10 and repeated frames will all yield 10 for that pixel
- If random noise of say rms=1 is present then that pixel will sometimes be 10 and sometimes be 11. The average of many frames will be close to 10.4
That is what I was talking about for "dither". It is quite distinct from the typical √n behavior associated with gaussian noise and repeat measurement. I'm uncertain as to whether this is what you were describing for dither.
If your 'true' value is 10.4, then the detector must, in some frames, detect more than 10 photons, since light itself is quantized. So some fraction of your frames will read 11 counts instead of 10 even in this perfect scenario, and the average of many frames will approach 10.4 counts.
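Both effects show up in a quick simulation (the ADC model and the numbers are just hutchphd's toy example, not any real sensor): without noise, a 4-bit ADC reads 10 every single frame and averaging gains nothing; with about 1 rms of noise, the readings straddle 10 and 11 and the average converges on 10.4.

```python
import numpy as np

rng = np.random.default_rng(2)
true_value = 10.4
n_frames = 100_000

def digitize(x):
    # Toy 4-bit ADC: round to the nearest integer code, clipped to 0..15.
    return np.clip(np.rint(x), 0, 15)

# Noise-free frames: every one reads exactly 10, so the mean is stuck at 10.
no_noise = digitize(np.full(n_frames, true_value))

# Frames with ~1 rms of noise: the noise "dithers" the ADC across the
# 10/11 threshold, and the average recovers the sub-LSB value.
with_noise = digitize(true_value + rng.normal(0, 1.0, n_frames))

avg_no_noise = no_noise.mean()      # exactly 10.0
avg_with_noise = with_noise.mean()  # close to 10.4
```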
In addition, if you mean that the signal itself has 1 rms of noise, remember that noise pushes the pixel sometimes above its average and sometimes below it. If you take a perfect signal of 10 and add 1 rms of zero-mean noise, the average is still 10. For the average to be 11 you'd have to add an actual 1-count signal (with its own noise) to the image. Noise is just random variation, so adding noise doesn't add any signal.
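That last point takes two lines to demonstrate (purely illustrative numbers): adding zero-mean noise to a signal of 10 widens the distribution to about 1 rms, but the average stays at 10.

```python
import numpy as np

rng = np.random.default_rng(3)
signal = 10.0
# Add 1 rms of zero-mean Gaussian noise to a million samples of the signal.
noisy = signal + rng.normal(0, 1.0, 1_000_000)

mean = noisy.mean()  # still ~10: the noise adds no signal
rms = noisy.std()    # ~1: the noise only adds spread
```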
sophiecentaur said:
The achievable dynamic range / bit depth will be greater with stacking than with long exposures (ten times for a stack of ten images). Also, a long exposure achieves just one averaging process (rms - or whatever) whereas stacking of each pixel can use whatever algorithm best suits the noise characteristic of the electronics and other varying inputs. As you say, the effect of spreading of the star images is also a factor (star trails and atmospheric distortions). Guiding can help with that (and also, registration algorithms can be applied to each 'subframe').
I have yet to understand how this applies to situations with very low level signals, down at the threshold of the sensor. (I do not have any cooling on my cheapo cameras)
From a purely statistical standpoint, stacking frames instead of taking one long exposure is actually worse for raising your SNR and picking out very faint details. This is because the sensor adds noise on every readout. A single long exposure gathers light continuously, so the signal from a faint detail can grow as high as you like (within sensor and time limits) before that readout noise is added just once. If your readout noise is 5 rms, a single exposure might raise a faint target high enough to be clearly seen over that 5 rms. But with many short exposures, the target is buried in readout noise in every single frame. In other words, a single long exposure pays the readout-noise penalty once while the signal keeps growing, whereas stacking pays it on every frame and can only beat it down by the square-root averaging law.
Imagine we have perfect, noise-free images. In our long exposure, the signal of the faint target is 1000, while in our short exposure our target's signal is 10. Now let's add our 5 rms of noise to the image, giving us 1000 with 5 rms readout noise and 10 with 5 rms readout noise. The noise is half the signal in the short exposures, but no biggie, we should be able to average out the noise, right?
To collect the same total light as our single long exposure we need to stack 100 short exposures. Since the signal here is noise-free, averaging them still gives us 10. For the readout noise, averaging ##n## frames gives ##\sigma=\sqrt{\frac{R^2}{n}} = \sqrt{\frac{5^2}{100}}=\sqrt{\frac{25}{100}}=0.5##. Our short exposures average out to give us a signal of 10 with 0.5 rms of noise.
To compare with our long exposure we scale it to the same signal level: divide both its signal and its noise by 100, giving a signal of 10 with 0.05 rms noise. In terms of SNR, our long exposure has an SNR of 200, while our stacked image has an SNR of 20. So taking one long exposure has, in this case, given a factor of 10 better SNR on the target than stacking, despite the total imaging time being identical. Obviously the real benefit will be smaller once other noise sources are included.
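Here's a quick Monte Carlo check of those numbers (a sketch assuming pure Gaussian read noise and no other noise sources, with the same made-up signal levels as above): repeating the experiment many times lets us measure the noise empirically.

```python
import numpy as np

rng = np.random.default_rng(4)
read_noise = 5.0
n_trials = 20_000  # repeat the whole experiment many times to measure the noise

# One long exposure: signal 1000 plus a single dose of read noise.
long_exp = 1000.0 + rng.normal(0, read_noise, n_trials)

# 100 short exposures of signal 10 each, averaged: 100 doses of read noise.
short_stack = (10.0 + rng.normal(0, read_noise, (n_trials, 100))).mean(axis=1)

# Scale the long exposure down by 100 so both have signal 10, then compare.
snr_long = (long_exp.mean() / 100) / (long_exp.std() / 100)  # ~200
snr_stack = short_stack.mean() / short_stack.std()           # ~20
```

The averaging knocks the per-frame 5 rms down to ##5/\sqrt{100}=0.5##, exactly the square-root law: good, but a factor of 10 short of the single long exposure.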
We stack images because of all the other benefits it brings over taking long exposures, some of which have already been discussed here.