PhysicoRaj said:
We have ways to take out read noise, like dithering and calibration, right? If read noise can be effectively eliminated, the advantages of short-enough subframes become more obvious.
Read noise cannot be eliminated, unfortunately. Boy howdy, that would be nice. But no.
There are two aspects of read noise of interest: the mean (i.e., "average") and the standard deviation. Calibration can help with the mean, but is helpless regarding the standard deviation.
Dark frames are a way to characterize the mean of the read noise + thermal noise. Bias frames characterize the mean of the read noise alone (no thermal component). Dark-frame subtraction and/or bias-frame subtraction can effectively eliminate the mean of the read noise. Neither, however, can do anything about the standard deviation of the noise.
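As a sanity check, here's a minimal numpy sketch of that point (all the numbers are made up): subtracting a master bias removes the mean of the read noise, but the pixel-to-pixel scatter survives untouched.

```python
import numpy as np

rng = np.random.default_rng(42)
mu_r, sigma_r = 100.0, 5.0  # hypothetical read-noise offset and scatter, in ADU

# A "light" frame containing only read noise, plus a master bias averaged
# from 100 bias frames (so the master's own noise is small).
light = rng.normal(mu_r, sigma_r, size=(512, 512))
master_bias = np.mean(
    [rng.normal(mu_r, sigma_r, size=(512, 512)) for _ in range(100)], axis=0
)

calibrated = light - master_bias
# The mean (offset) is gone, but the scatter is still roughly sigma_r.
print(calibrated.mean())  # close to 0
print(calibrated.std())   # still close to 5
```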
Dithering is a way to combat imperfections in your calibration by shifting the signal around a little in the spatial domain (to nearby, but different, pixels). Again, though, it does nothing to eliminate the residual noise; it just smears the residual noise spatially.
For a given camera gain (ISO setting for DSLRs), each pixel of each sub-frame will receive read noise with roughly constant mean, \mu_r, and constant standard deviation, \sigma_r. As mentioned earlier, the mean part of the read noise, \mu_r, can be effectively eliminated via calibration, but the part represented by the standard deviation just accumulates with more and more sub-frames: it grows proportionally with the square root of the number of sub-frames.
(It's true that when averaging sub-frames [i.e., stacking], the effective read noise is reduced to \frac{\sigma_r \sqrt{N}}{N} = \frac{\sigma_r}{\sqrt{N}}, implying the more frames the better. But don't be misled into reducing the exposure time of sub-frames for the sole reason of having more of them. Remember, the signal strength of each sub-frame is proportional to its exposure time, so by shortening the sub-frame exposure you're also reducing the signal strength, and thus the signal-to-noise ratio, of each sub-frame.)
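That \frac{\sigma_r}{\sqrt{N}} behavior is easy to verify numerically. A quick sketch, with arbitrary made-up values for \sigma_r and N:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_r = 5.0   # per-sub-frame read-noise std (ADU), made-up value
N = 64          # number of sub-frames stacked

# N sub-frames of pure, mean-subtracted read noise, averaged together.
subs = rng.normal(0.0, sigma_r, size=(N, 100_000))
stacked = subs.mean(axis=0)

# The scatter of the average drops to sigma_r / sqrt(N) = 5/8 = 0.625.
print(stacked.std())
print(sigma_r / np.sqrt(N))
```

Of course, this only shows the noise side of the ledger; it says nothing about the per-sub signal you give up by shortening exposures, which is the point of the caveat above.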
Recall that the standard deviations of other noise sources, such as thermal noise or light pollution, are independent of the number of sub-frames: they accumulate with the square root of total exposure time, whether you take many short exposures or one long one. You can combat the mean of the thermal noise with dark-frame subtraction, but you can't do squat about its standard deviation (for a given exposure time and temperature). And you can't do anything at all about the light pollution (except by reducing it from the get-go: travel to a darker location or, to some degree, use filters).
So when determining your "optimal" exposure time for sub-frames, your goal is to increase the exposure time until the combined standard deviation of thermal noise and light pollution (technically \sqrt{\sigma_{th}^2 + \sigma_{lp}^2}) is roughly greater than or equal to the standard deviation of the read noise, for a single sub-frame.
In other words, increase the sub-frame exposure time such that read noise is not the dominant noise source.
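If you model both thermal and light-pollution noise as shot noise (variance growing linearly with exposure time), that criterion pins down a break-even exposure time. A sketch with made-up numbers; sigma_r, sky_rate, and dark_rate are placeholders you'd measure for your own camera and site:

```python
# Hypothetical numbers -- substitute values measured for your own setup.
sigma_r = 5.0    # read-noise std per sub-frame, in electrons
sky_rate = 0.8   # light-pollution flux, electrons/pixel/second
dark_rate = 0.1  # thermal (dark-current) rate, electrons/pixel/second

# Sky and thermal noise are shot noise: variance grows linearly with
# exposure time t, so sigma_th^2 + sigma_lp^2 = (sky_rate + dark_rate) * t.
# The criterion sqrt(sigma_th^2 + sigma_lp^2) >= sigma_r then becomes:
t_min = sigma_r**2 / (sky_rate + dark_rate)
print(t_min)  # ~27.8 s: shorter subs than this are read-noise dominated
```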
Btw, all this applies to deep-sky astrophotography targets. Planetary targets using lucky-imaging techniques are a different beast. There you'll gladly sacrifice some signal strength per sub-frame to gain many clear snapshots of the target, warped and noisy as they may be, to combat atmospheric seeing. In planetary astrophotography, it's understood that read noise is the dominant noise source, by far.
[Edit: added boldface to the line summarizing my main point.]