sophiecentaur said:
So the apparent visibility of this artefact is about the same as the visibility of the faintest stars in the picture. So, could that mean that the contrast range that the sensor can reproduce (for pixels which are close together) is only about 1.75%? OR could it be some extra characteristic imperfection of the lens (supposed to be ED Apochromatic)?
The original image was dng and the artefact was visible on PS, before being turned into jpeg.
I've gone around and around on this question, and I manage to confuse myself every time, so I'll reply as best I can and highlight where I fall short:
The question is simple: What is the maximum obtainable dynamic range of my (stacked and processed) image? In other words, what is the maximum range of magnitudes I can obtain in a single (stacked and processed) image?
It comes down to the noise floor and the number of bits available (the discretization of the signal). Let's consider just a single channel to keep this 'simple'. Displays are 8-bit and RAW files are (say) 14-bit. The magnitude span between full scale and the smallest nonzero count is Δm = 2.5·log10(full scale), so if 100% full scale corresponds to magnitude 0 (Vega), a signal-to-noise ratio of 1 is reached at magnitude 6 (8-bit, from 2.5·log10(255)) or magnitude 10.5 (14-bit RAW, from 2.5·log10(16383)). This assumes there is no noise in the system: the faintest signal has an intensity value of '1' in either case. If instead there is background light, thermal noise, etc., such that SNR = 1 at 5% of full scale (which is far better than I actually get), the minimum magnitude comes out about the same for both bit depths, because the floor is then set by the noise rather than by the quantization.
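As a sanity check on those numbers, here is a small Python sketch (my own arithmetic, not anyone's library) that turns a bit depth, or a noise floor, into a magnitude span:

```python
import math

def span_from_bits(bits):
    """Magnitude range between full scale and a 1-count signal, assuming no noise."""
    return 2.5 * math.log10(2**bits - 1)

def span_from_noise_floor(floor_fraction):
    """Magnitude range when SNR = 1 sits at some fraction of full scale."""
    return 2.5 * math.log10(1.0 / floor_fraction)

print(round(span_from_bits(8), 1))            # 6.0
print(round(span_from_bits(14), 1))           # 10.5
print(round(span_from_noise_floor(0.05), 1))  # 3.3 -- independent of bit depth
```

The last line is the point of the caveat above: once the noise floor is at 5% of full scale, the usable span is about 3.3 magnitudes no matter how many bits the ADC has.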
Caution: I'm not sure I believe this. It's not clear to me why adding bits should make the sensor 'more sensitive', since the incoming intensity hasn't changed. My best interpretation is that more bits let me more finely separate faint signals from the noise floor.
Stacking and averaging many frames generates an image with more bits: summing 2^k frames adds k bits to the accumulator. The most I have ever generated is a 24-bit image (1024 14-bit images). According to the above, that image can potentially span 18 magnitudes, which seems to strain credibility. That said, I can reliably obtain clear images of magnitude 15 stars once I have accumulated about 300 frames (a 22-bit image; the same calculation gives a 16.5-magnitude floor).
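To make the stacking arithmetic concrete, here's the same back-of-envelope model in Python, treating each doubling of the frame count as one extra bit in the accumulator (a simplification that ignores read noise and so on):

```python
import math

def stacked_bits(frames, bits_per_frame=14):
    """Bit depth of the sum of `frames` frames of `bits_per_frame` bits each."""
    return bits_per_frame + math.log2(frames)

def magnitude_floor(frames, bits_per_frame=14):
    """Magnitude span implied by that bit depth (noise-free assumption)."""
    return 2.5 * math.log10(2**stacked_bits(frames, bits_per_frame))

print(stacked_bits(1024))               # 24.0
print(round(magnitude_floor(1024), 1))  # 18.1
print(round(magnitude_floor(300), 1))   # 16.7 (about 22 bits)
```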
So as a practical matter, it seems that I can generate images spanning up to 18 stellar magnitudes. There's obviously a bottom end set by the received intensity, but I have a really hard time calculating it, even starting from '1 photon per frame'.
The artifacts occur when I 'squish' the 22-bit image into 8 bits. Ideally, I'd like to map the 18-magnitude span (a brightness ratio of about 1:14,552,000) onto a 1:255 scale, but as you can see, artifacts appear. One critical step in minimizing the 'skirt' is to avoid clipping at both the top and the bottom: keep the noise floor just barely above 'zero', and let only a few of the brightest stars sit at 100% scale.
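A minimal sketch of that 'squish' in Python, assuming the stacked image is a float array normalized to 0..1; the pedestal value and arcsinh stretch factor are hypothetical knobs for illustration, not the settings I actually use:

```python
import numpy as np

def squish_to_8bit(img, pedestal=0.002, stretch=1000.0):
    """Map a high-dynamic-range image (float array, 0..1) to 8 bits
    while keeping the noise floor just barely above zero."""
    out = np.clip(img - pedestal, 0.0, None)   # subtract the pedestal, never go negative
    out = out / out.max()                      # renormalize: only the brightest pixels hit 1.0
    out = np.arcsinh(stretch * out) / np.arcsinh(stretch)  # compress the huge brightness ratio
    return np.round(out * 255.0).astype(np.uint8)
```

The arcsinh stretch is roughly linear near zero and logarithmic at the top, which is why it's popular for astro images; a straight linear mapping would bury everything fainter than about 1/255 of full scale in the bottom code value.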
Imaging Orion is particularly difficult because of the dynamic range present: I typically have to choose between blowing out the Trapezium and not getting the full glorious nebula.
Does that help?