sophiecentaur said:
This is the nub of the problem. Even the JWST is working in marginal conditions, even if the margins have been changed a lot. If they were operating the scope within those margins then they would be wasting many billions of dollars.
@collinsmark seems to be insisting that the limited model of his maths is all that needs to be considered, but in the limited situation of a 16-bit sensor and the presence of many other interfering low-level sources, there is a very real limit to how far the maths will follow reality. This is not a problem but it's what limits what we can see.
I've said this before, but I'll try to say it again in different words. The bit depth of the sensor hardware is not the end-all-be-all of the overall bit depth that the camera is capable of achieving. You can increase the effective bit depth to some degree (sometimes to a very significant degree) through the process of stacking multiple subframes.
I think it's best now to explain with examples. I'll start with some real-world examples and finish with a hypothetical, extreme example.
Here's an image of Mars that I took with my backyard telescope a couple of years ago:
The color resolution (bit depth) looks very smooth, does it not? However, the image was taken with an 8-bit sensor! The sensor only had 256 levels. Yet look at the final image: it has way, way higher level resolution than 8 bits. How did I do that? Integration. The final image was composed by integrating thousands of individual subframes.
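To illustrate the principle (this is only a toy sketch, not my actual Mars processing pipeline), here's a short NumPy simulation I made up: a smooth brightness ramp spanning just a few ADU, roughly 1 ADU of random noise acting as natural dither, and 2,000 simulated 8-bit subframes averaged together. The capture_8bit helper and all of the numbers are assumptions for illustration only.

Python:
import numpy as np

rng = np.random.default_rng(0)

# "True" scene brightness: a smooth ramp spanning only a few ADU of an 8-bit sensor.
true_signal = np.linspace(100.0, 103.0, 1000)

def capture_8bit(signal):
    # One simulated subframe: add roughly 1 ADU of random noise, then quantize to integers 0-255.
    noisy = signal + rng.normal(0.0, 1.0, signal.shape)
    return np.clip(np.round(noisy), 0, 255)

single = capture_8bit(true_signal)
stack = np.mean([capture_8bit(true_signal) for _ in range(2000)], axis=0)

print("distinct levels, single frame:    ", np.unique(single).size)
print("distinct levels, 2000-frame stack:", np.unique(stack).size)
print("RMS error, single frame (ADU):    ", np.sqrt(np.mean((single - true_signal) ** 2)))
print("RMS error, 2000-frame stack (ADU):", np.sqrt(np.mean((stack - true_signal) ** 2)))

A single frame lands on only a handful of integer levels, while the averaged stack resolves steps far finer than 1 ADU, and its RMS error drops roughly as 1/√N.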
Check out this Hubble Ultra Deep Field image (I didn't take this one; Hubble Space Telescope [HST] did):
HST pointed to a very, very unpopulated (unpopulated by nearby stars, nebulae, etc.) patch of sky, taking many individual 20-minute subframes and stacking them to achieve this final image.
One might say, "That's impossible; all those galaxies would just blend into the background sky glow." Well, no, it's not impossible. As HST shows us here, it is possible. As I've said in a previous post, the standard deviation of the background sky glow in the stacked image tends toward zero with increased total integration time, and thus the background sky glow can be subtracted out.
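Here's a rough sketch of that idea with made-up numbers (nothing to do with HST's actual calibration): each simulated subframe has a bright Poisson sky of about 1,000 counts per pixel, and a faint source adds only about 2 counts. Averaging more subframes shrinks the pixel-to-pixel scatter of the stacked sky like 1/√N, so once the (now very well-determined) sky level is subtracted, the 2-count source stands out pixel by pixel.

Python:
import numpy as np

rng = np.random.default_rng(1)

n_pix = 4000
sky_mean = 1000.0   # bright, uniform sky glow: about 1000 counts per pixel per subframe
source = 2.0        # a faint galaxy adds only about 2 counts per pixel per subframe

for n_frames in (1, 100, 2500):
    # Average n_frames simulated exposures of sky-only pixels and sky-plus-source pixels.
    sky_only = rng.poisson(sky_mean, (n_frames, n_pix)).mean(axis=0)
    sky_plus = rng.poisson(sky_mean + source, (n_frames, n_pix)).mean(axis=0)
    scatter = sky_only.std()                           # pixel-to-pixel scatter of the stacked sky
    recovered = (sky_plus - sky_only.mean()).mean()    # subtract the estimated sky level
    print(f"{n_frames:5d} frames: stacked-sky scatter = {scatter:6.2f} counts/pixel, "
          f"per-pixel SNR = {source / scatter:5.2f}, recovered source ≈ {recovered:4.2f} counts")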
One might also say, "That's impossible. The detail in objects so dim would be less than 1 ADU of the bit depth of HST's sensor." Sure, some of the detail was less than 1 ADU of the sensor for a single, 20-minute exposure, but HST gained more bit depth by integrating many individual sub-exposures.
The image was taken with 4 different filters, and all subexposures were approximately 20 minutes each. For the two shorter-wavelength filters, 112 individual subframes were stacked per filter; for the two longer-wavelength filters, 288 individual subframes were stacked per filter.
That makes for a total integration time of (2 × 112 + 2 × 288) × 20 minutes = 16,000 minutes, or over 11 days.
And no, integrating a whole 11 days' worth of subframes to produce the "Hubble Ultra Deep Field" image, instead of settling on a single, 20-minute exposure, is not a waste of many billions of dollars.
I'm not just pulling math out of my butt. This is how real-world science is done. Right here.
------------
Now for a hypothetical, extreme example. Consider a 1-bit camera. Each pixel can register only on or off.
For the purposes of this example, assume the camera has a high quantum efficiency and is operating near unity gain, ~1 e⁻/ADU. Also assume the sensor's read noise is small.
Now, put the camera on a tripod and point it at your favorite sleeping kitten, somewhere with both bright and dark regions (maybe the cat is sleeping in a ray of sunlight from the window). Adjust the exposure time such that some pixels are consistently black over many different subframes, some pixels are consistently white over many different subframes, and the rest of the pixels randomly alternate between black and white from one exposure to the next, to a varying extent from pixel location to pixel location.
Now take and record 255 separate subexposures. When you analyze the data, you'll find that in the really dark regions some pixels are black in all 255 subframes, while a few pixels are white in 1 of the 255 frames. Moving to a slightly brighter region, there are pixel locations that are white in 2 of the 255 frames. In the really bright regions, some pixels are white in all 255 frames, but some pixels nearby are white in only 254 of the 255 frames, and others nearby are white in only 253 of the frames. In regions of intermediate brightness, the number of white frames at a given pixel location might hover consistently around, say, 46 out of the 255 subframes.
Now sum (or average, if you store your data in floating-point format) each pixel location over all 255 subframes. Blam! You've got yourself an image with a bit depth of 8 bits. You started with a camera with only 1 bit, and now you have an 8-bit image: black, white, and everything in between, 256 levels total (0 to 255).
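If you want to play with this, here's a bare-bones simulation of the idea. The photon rates and the one-photon threshold are assumptions I'm making for the sketch, with NumPy's Poisson generator standing in for photon arrivals: a 1-bit pixel reads "white" when it catches at least one photon during a subframe, and summing 255 such frames gives an integer between 0 and 255 at every pixel location.

Python:
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical mean photon counts per subframe at eight sample pixel locations,
# running from deep shadow up to the bright, sunlit patch of fur.
scene = np.array([0.01, 0.05, 0.2, 0.6, 1.5, 3.0, 6.0, 12.0])

def one_bit_frame(scene):
    # One subframe from the 1-bit camera: a pixel reads "white" (1) if it catches at least one photon.
    return (rng.poisson(scene) >= 1).astype(int)

n_frames = 255
white_counts = np.sum([one_bit_frame(scene) for _ in range(n_frames)], axis=0)   # integers 0..255

print("mean photons per subframe:", scene)
print("white frames out of 255:  ", white_counts)

(As a side note, a pixel averaging about 0.2 photons per subframe comes out white in roughly 46 of the 255 frames, which is the sort of intermediate value I mentioned above.)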
Sure, this particular image suffers quite a bit from shot noise, but you can reduce the shot noise by integrating further, producing an image with a bit depth greater than 8 bits as a byproduct. You'll even find that by doing so, you can eke out more detail in the shadows that were previously consistently all black.
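Continuing the same made-up simulation, here's what extended integration does for two shadow pixels so dim that 255 one-bit frames usually record nothing at all. Stacking 2,550,000 frames gives per-pixel sums that can range from 0 to 2,550,000 (over 21 bits of range), and the shot noise in each pixel's white-frame fraction shrinks roughly as 1/√N, so the two shadow brightness levels become clearly distinguishable.

Python:
import numpy as np

rng = np.random.default_rng(3)

# Two shadow pixels so dim they are usually black in every one of 255 one-bit frames.
shadow = np.array([0.002, 0.004])   # hypothetical mean photons per pixel per subframe

for n_frames in (255, 25_500, 2_550_000):
    frames = rng.poisson(shadow, size=(n_frames, shadow.size)) >= 1   # True where a pixel read "white"
    frac_white = frames.mean(axis=0)    # shot noise in this fraction shrinks roughly as 1/sqrt(N)
    print(f"{n_frames:>9,d} frames -> fraction of white frames per pixel: {frac_white}")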
Isn't math neat?