
I How to do the Dark Frame Subtraction? (camera imaging)

  1. Jan 16, 2017 #1
    I have an image with uneven contrast. Now, to correct the image, I need a dark frame to remove the noise inherent to the sensor or to the camera itself. How can I use the dark frame with the image? I have been searching the net, but most of the results use Photoshop or similar software. How can I apply it mathematically? I was thinking of inverting the values of the dark frame (a pixel value of 100 would become 255 - 100 = 155, considering an 8-bit image) and then subtracting that from the image (although I am not sure if I need to subtract or divide). Is this correct?
     
  3. Jan 16, 2017 #2

    Drakkith

    Staff: Mentor

    You should just need to subtract the value of each dark frame pixel from the corresponding light frame pixel.
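    In code terms, a minimal sketch of that per-pixel subtraction (Python/NumPy here; the names are illustrative and assume two 8-bit frames of the same shape) might look like:

        import numpy as np

        def subtract_dark(light, dark):
            # Work in a signed type so pixels where the dark exceeds the
            # light frame don't wrap around, then clip back to 8-bit range.
            diff = light.astype(np.int16) - dark.astype(np.int16)
            return np.clip(diff, 0, 255).astype(np.uint8)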
     
  4. Jan 16, 2017 #3
    Is it really that simple? Will it even-out the contrast of the image? (I can't try it right now because I don't have access to my files). :biggrin:
     
  5. Jan 16, 2017 #4

    Drakkith

    Staff: Mentor

    I don't know. I don't know what your images look like or anything.
     
  6. Jan 16, 2017 #5

    sophiecentaur

    Science Advisor
    Gold Member

    @Drakkith: Just hitching a ride here, but it's a topic that I'm trying to get my head around.
    This statement could make sense, as long as the noise from the sensor is unvarying from pixel to pixel [Edit: I mean from picture to picture]. I doubt that it is, though, so what does the dark frame do for you apart from removing hot pixels, which will be there fairly consistently as long as your exposure times are the same? Apart from that, I guess the dark frame will give you a good black level. Several darks could be averaged to give you a better black level. Am I missing something here?
    Also, to take care of varying contrast, isn't it good practice to take 'flats' too? These can be used to vary the gain across the image to compensate for the variation seen over the flat.
     
    Last edited: Jan 16, 2017
  7. Jan 16, 2017 #6
    To correct an image with uneven illumination, you generally use a white reference image and a dark reference image, assuming the former is available (as it would be if your images were from a microscope, for example, where you can get an image of a blank slide). The equation to use is in the following link, right at the beginning of the section titled "1. Correction from a Dark Image and a Bright Image": https://clouard.users.greyc.fr/Pantheon/experiments/illumination-correction/index-en.html

    g(x,y) = C * [f(x,y) - d(x,y)] / [b(x,y) - d(x,y)]

    where f is your image, d is the dark reference and b is the white reference. C is a scaling factor you need because the ratio gives real numbers less than or equal to 1.0 and you want the result to be in the 0 to 255 range for an 8-bit image.
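    A rough NumPy rendering of that equation (the function name, the choice C = 255 and the guard against dividing by zero are assumptions of mine, not part of the linked page):

        import numpy as np

        def correct_illumination(f, d, b, c=255.0):
            # g = C * (f - d) / (b - d), computed in floating point.
            f, d, b = (a.astype(np.float64) for a in (f, d, b))
            denom = b - d
            denom[denom == 0] = 1.0   # avoid division by zero on dead pixels
            g = c * (f - d) / denom
            return np.clip(g, 0, 255).astype(np.uint8)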

    Of course, if you are just given images created elsewhere with the shading already baked in, then this can't be used, as you don't know what the reference images are. Other methods are available, however.
     
  8. Jan 16, 2017 #7
    There is often a fairly stable spatial variation in the dark frame that contributes to the unevenness in the image.
     
  9. Jan 16, 2017 #8
    Yes, it is, IF that is the problem you have; but in practice it is more complicated, and the issue could be caused by something else, like light leakage or light pollution (hard to tell without an image).

    First, a nitpick: you can never remove noise from an image (signal); when you say "remove noise" you should really say "improve the signal-to-noise ratio". Your basic digital image has three signals: the photon signal (which contains the data you want, but virtually always other things as well, like airglow or light pollution), the dark current (which depends on the properties of the individual pixels, the exposure time and the temperature of the pixels) and the bias signal (an offset to make sure that the ADC doesn't clip - also per pixel).

    If the dark current and bias signal in the dark frames (preferably the average/median/sigma-kappa/etc. of many frames) match those in the light exposures, then after subtraction you will be left with just the light signal, but with all the noise from the initial exposures plus the noise from the darks. There are, however, many ways the darks might fail to match the original exposures, among them: light leakage, mismatched chip temperatures, amp glow that only appears after many exposures, exposures or darks with different exposure times or a changing temperature, and many more.
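    As a sketch of the "average/median/sigma-kappa of many frames" point (the median is chosen here arbitrarily and the function name is illustrative):

        import numpy as np

        def make_master_dark(dark_frames):
            # Median-combine a list of dark exposures taken with the same
            # exposure time and sensor temperature into one lower-noise
            # master dark.
            stack = np.stack([d.astype(np.float64) for d in dark_frames])
            return np.median(stack, axis=0)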
     
  10. Jan 16, 2017 #9

    sophiecentaur

    Science Advisor
    Gold Member

    Interesting. That could imply that it might be possible to build up a library of darks (for the various exposure conditions and camera temperatures) and never bother with making any subsequent dark exposures.
     
  11. Jan 16, 2017 #10
    Yes, it is very common if you have a thermostat-controlled camera.
     
  12. Jan 16, 2017 #11

    sophiecentaur

    Science Advisor
    Gold Member

    Well, you live and learn.
     
  13. Jan 16, 2017 #12

    Andy Resnick

    Science Advisor
    Education Advisor
    2016 Award

    As others have noted, it's not clear what you are trying to do. Dark frame correction is not the same as non-uniformity correction (NUC). "Uneven contrast" can mean lots of things - posting a 'bad' image would be helpful. NUC algorithms are an active area of research for a variety of reasons. In addition to pixel's recommended URL (which seems reasonable), a few other sites are:

    https://www.mathworks.com/help/imag...nation.html?requestedDomain=www.mathworks.com
    http://homepages.udayton.edu/~hardierc/index_files/Page610.htm

    But without any other information from you, it's hard to recommend potential solutions.
     
  14. Jan 17, 2017 #13
    What do you mean by 'flats'?

    If I understand correctly, does the correction depend on the images? What if I just want to decrease the noise from the camera itself, and I have a dark frame taken with the same parameters as the image - should I just subtract the dark frame from the image?
     
  15. Jan 17, 2017 #14

    sophiecentaur

    Science Advisor
    Gold Member

    I'm no expert but, AFAIK, a flat is an image of a flat, uniformly illuminated pale surface (not peak white). It tells you how the gain/sensitivity varies over the sensor + optics (it shows vignetting etc.). You use it to modify the gain of your wanted image everywhere, by multiplying by a correcting factor that is inversely proportional to the values in the flat image. I imagine it can help with colour correction too.
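    A hedged sketch of that multiply-by-the-inverse-of-the-flat idea (the normalisation by the mean of the flat is my own choice):

        import numpy as np

        def apply_flat(image, flat):
            # Scale each pixel by a factor inversely proportional to the
            # flat, normalised so an ideal, even flat leaves the image
            # unchanged.
            flat = flat.astype(np.float64)
            gain = flat.mean() / np.where(flat == 0, 1.0, flat)
            return image.astype(np.float64) * gain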
     
  16. Jan 17, 2017 #15
    The dark frame you captured separately will not have exactly the same noise as the dark-frame contribution to your image, due to the random nature of the noise. You can subtract the dark frame to remove the offset due to the dark signal. To reduce noise, you should do frame averaging of both the darks and the image (and then do the subtraction to remove the offset).
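    A small sketch of that frame-averaging-then-subtraction in NumPy (names and the clipping at zero are assumptions):

        import numpy as np

        def average_and_subtract(light_frames, dark_frames):
            # Averaging N frames reduces the random noise by roughly
            # sqrt(N); subtracting the averaged dark then removes the
            # dark-signal offset.
            light_avg = np.mean([f.astype(np.float64) for f in light_frames], axis=0)
            dark_avg = np.mean([f.astype(np.float64) for f in dark_frames], axis=0)
            return np.clip(light_avg - dark_avg, 0.0, None)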
     
  17. Jan 17, 2017 #16

    Drakkith

    Staff: Mentor

    To be clear, you cannot decrease the noise or improve the signal-to-noise ratio of a single image once it is taken. All dark frames and flat frames do is remove the dark current and bias signal from the image and correct for things like hot pixels, vignetting, and other effects which reduce or increase the signal on each pixel. The noise inherent in the image is made up of shot noise, dark current noise, readout noise, and others, none of which can be corrected for in a single image as far as I know.

    The term "noise" here means a random variation in the acquired signal that causes each pixel value to fluctuate values between otherwise identical images.
    The term "signal" here means anything which adds to the final value for each pixel in the image (other than noise). This includes acquired photons, dark current, and bias signal.
     
  18. Jan 17, 2017 #17

    sophiecentaur

    Science Advisor
    Gold Member

    The darks will have worthwhile information about hot pixels, which could otherwise be interpreted as stars in the stacking process (?). Averaging a number of dark frames would presumably be worthwhile, to get a better black level.
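    For what it's worth, one simple (purely illustrative) way to turn an averaged master dark into a hot-pixel mask, so those pixels can be repaired before stacking:

        import numpy as np

        def hot_pixel_map(master_dark, n_sigma=5.0):
            # Flag pixels sitting far above the typical dark level; the
            # mask can be used to replace them (e.g. with a local median)
            # before stacking.
            return master_dark > master_dark.mean() + n_sigma * master_dark.std()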
     