davenn said:
your idea of noise in an image and my idea seem to be very different
I really can't figure where you are going ?
Mine comes from my book on astronomical image processing and is more technical than the way I've usually seen noise described.
Imagine taking two images of the same starfield. These two images have identical exposure times, identical filtering, were taken with the same camera, the same software, at the same location, etc. If you were to inspect every pixel on both images, you would find that these pixels are not identical. The number of electrons counted from each pixel on one image is slightly different from the number of electrons from the corresponding pixels on the other image (I say electrons and not photons because that's what's physically being counted by the detector). If you were to take a third image, you would find that, again, all of the pixels have slightly different values. This variation is noise, and while you cannot predict the exact value a pixel will have between successive images, it follows a certain statistical pattern, namely that the noise varies as the square root of the signal.
That signal could be the actual photons being captured by the sensor (either from your target or from stray or unwanted light), the electrons generated in the sensor by dark current, or the electrons generated by the onboard electronics. All of these things serve as 'signals' and all contribute their own noise to the resulting image. Basically anything that generates electrons in the detector is a signal.
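To make the square-root behaviour concrete, here's a quick Python/NumPy sketch (my own toy illustration, with a made-up mean count of 10,000 electrons) that simulates the same pixel over many exposures:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 1,000 exposures of a single pixel whose true (mean) signal is
# 10,000 electrons.  Counting electrons is a Poisson process, so each
# exposure returns a slightly different value.
mean_signal = 10_000
counts = rng.poisson(mean_signal, size=1_000)

print("mean count           :", counts.mean())         # ~10,000
print("measured noise (std) :", counts.std())          # ~100
print("sqrt of the signal   :", np.sqrt(mean_signal))  # exactly 100
```

The measured scatter comes out at about 100 electrons, i.e. the square root of the 10,000 electron signal, which is exactly the pattern described above.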
Subtracting dark frames, flat frames, or bias frames does not remove the noise added to the image by dark current, by the bias, or by the flat, dark, and bias frames themselves (strictly, flats are divided out rather than subtracted, but the point is the same). The reason these frames are taken and applied is to remove the signal from each of these sources, in addition to fixing hot/cold pixels. That leaves you with, ideally, the signal from your target, the signal from the background and ambient light, and the noise from all of the sources.
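Here's a rough sketch of that point, again just an illustration with made-up numbers: subtracting a simulated dark frame removes the mean dark signal, but the pixel-to-pixel scatter actually gets slightly worse, because the noise from the two frames adds in quadrature:

```python
import numpy as np

rng = np.random.default_rng(1)

n_pix = 100_000        # a pretend 100,000-pixel sensor
target = 500           # mean electrons per pixel from the object (made up)
dark = 2_000           # mean electrons per pixel from dark current (made up)

# The light frame contains target + dark signal; the dark frame only dark signal.
light_frame = rng.poisson(target + dark, n_pix).astype(float)
dark_frame = rng.poisson(dark, n_pix).astype(float)

calibrated = light_frame - dark_frame

print("mean after subtraction :", calibrated.mean())   # ~500: the dark *signal* is gone
print("noise after subtraction:", calibrated.std())    # ~sqrt(2500 + 2000) ≈ 67 e
print("noise of light frame   :", light_frame.std())   # ~sqrt(2500) = 50 e
```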
davenn said:
noise is noise is noise, regardless of how it is generated and the processes mentioned do lots to remove those various types of noise.
Taking multiple exposures and adding/averaging them together does the same thing that taking a longer exposure would do: it increases the signal-to-noise ratio. If we examine the same pixel from images taken of the same object at 5, 10, and 20 second exposures, we would find that the pixel value increases approximately linearly, with the 10 second image having twice the signal of the 5 second image and the 20 second image having 4 times the signal. However, the noise does not increase linearly. The noise in the 10 second image is only ##\sqrt{2}## times the noise in the 5 second image, and the noise in the 20 second image is ##\sqrt{2}## times as much as in the 10 second image. So increasing the exposure time from 5 seconds to 20 has increased the signal by 4x but the noise by only ##\sqrt{2}\times\sqrt{2}##, or 2x. Hence the SNR has increased by a factor of 2. Stacking images is almost identical, except that the sensor's readout noise counts against you a bit more, since it is added with every frame you read out rather than just once for a single long exposure.
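A quick numerical sketch of that scaling, assuming a hypothetical pixel that collects 1,000 electrons every 5 seconds and a made-up read noise of 5 electrons per readout:

```python
import numpy as np

rate = 1_000 / 5   # electrons per second for this hypothetical pixel
read = 5           # made-up read noise, in electrons, added once per readout

for t in (5, 10, 20):
    signal = rate * t          # signal grows linearly with exposure time
    noise = np.sqrt(signal)    # shot noise grows as the square root
    print(f"{t:>2} s  signal = {signal:6.0f} e  noise = {noise:5.1f} e  SNR = {signal / noise:5.1f}")

# One 20 s exposure versus four stacked 5 s exposures:
single_snr = (rate * 20) / np.sqrt(rate * 20 + read**2)
stacked_snr = (4 * rate * 5) / np.sqrt(4 * rate * 5 + 4 * read**2)  # read noise paid 4 times
print("single 20 s SNR :", round(single_snr, 2))
print("4 x 5 s stacked :", round(stacked_snr, 2))
```

The SNR doubles going from 5 s to 20 s, and the stacked version comes out just slightly worse than the single long exposure because the read noise is paid four times instead of once.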
Narrowband filtering 'removes noise' by blocking all of that pesky background light that you don't want, which would otherwise add lots of noise to the image. After all, you could simply subtract some constant from all the pixel values of the image digitally so that the background light wasn't visible, except that the noise from that background can be larger than the signal of your target! That's why shooting in heavy light pollution is so bad. The target's signal is swamped by the inherent noise of the ambient light.
For example, when shooting from inside a city, in a 30 second exposure I might see pixel counts of more than 40,000 electrons per pixel all across my sensor. The noise inherent in this background light is roughly ##\sqrt{40,000}##, or ±200 e. Technically that ±200 e is the typical size of the random fluctuation about the central value of 40,000, not a hard limit. If I measured a particular pixel over many different exposures, the value would usually land within about 200 e of that 40,000, less often a bit more than 200 e above or below, even less often further out than that, and so on. When I compare this to the expected signal of my target, which may only be a hundred electrons per pixel over that 30 seconds, or even less, you can see that the variation per pixel due to the noise can be much larger than the signal from my target. This is what it means for a signal to be buried in the noise.
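Plugging those numbers into a little simulation (my own sketch, using the 40,000 e background and a 100 e target from above):

```python
import numpy as np

rng = np.random.default_rng(2)

sky = 40_000    # mean background electrons per pixel in 30 s
target = 100    # mean target electrons per pixel in 30 s

# The same pixel in many 30 s exposures, with and without the target present.
with_target = rng.poisson(sky + target, 10_000)
without_target = rng.poisson(sky, 10_000)

print("background noise (std):", without_target.std())                        # ~200 e
print("difference of means   :", with_target.mean() - without_target.mean())  # ~100 e
print("single-frame SNR      :", target / np.sqrt(sky + target))              # ~0.5
```

The single-frame SNR comes out around 0.5, so the 100 electrons from the target are completely lost in a fluctuation that is typically ±200 electrons.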
I hope I've made myself a bit clearer now.