Astro Image Stacking (optimize DSS image stacking and post processing)

  1. Jan 10, 2019 #1

    Andy Resnick
    Science Advisor
    Education Advisor

    I created this thread because every online resource I have examined to date has been largely worthless: either totally over-engineered or superficial garbage (can you tell I am irritated?). The problem is simple: optimize DSS image stacking and post-processing based on quantitative image data. The essential metrics are 1) signal-to-noise ratio and 2) dynamic range.

    Here's the basic scenario: I am in a light-polluted urban environment, and I image at low f-number, so fall-off (image non-uniformity) is a significant issue. My primary goal in stacking is to completely remove the sky background across the entire field of view and to compress the dynamic range, 'amplifying' faint objects relative to bright stars.

    Let's start with single frames; this already introduces potential confusion. I acquire frames in a 14-bit RAW format, but I have no way of directly accessing that data. So, for this thread, I used Nikon's software to convert a single-channel 14-bit RAW image into a 3-channel 16-bit TIF (RGB format). Most likely, this is done by summing 4 neighboring same-color pixels in the RAW data to generate a single TIF pixel (four 14-bit values span 14 + 2 = 16 bits). Here are two images, one taken at f/4 of the Pleiades, and the other taken at f/2.8 of the Flame and Horsehead Nebulae:

    [image: the Pleiades at f/4]

    [image: the Flame and Horsehead Nebulae at f/2.8]
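    To sanity-check the bit-depth arithmetic above, here is a minimal numpy sketch; the 2x2 same-color grouping is my assumption about what the converter does, not documented Nikon behavior:

    ```python
    import numpy as np

    # Four 14-bit samples each span 0..16383; their sum spans 0..65532,
    # which just fits in 16 bits: 4 * (2**14 - 1) = 2**16 - 4.
    rng = np.random.default_rng(0)
    block = rng.integers(0, 2**14, size=4, dtype=np.uint32)  # hypothetical 2x2 same-color block
    print(block.sum(), block.sum() < 2**16)  # always True: 14 bits + 2 bits = 16 bits
    ```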

    From these, I use ImageJ (via the Fiji distribution) to extract quantitative data. First, I'll provide a 'linescan' from the upper left corner to the lower right corner of each image. The plot shows the greyscale value at each pixel along the line:

    [linescan plot: Pleiades frame]

    [linescan plot: Flame/Horsehead frame]

    There are three basic results here. First, the falloff is especially significant at f/2.8. Second, you can see the effect of noise (both images were acquired at ISO 1000). Third, you can see tall spikes where the scan line happens to intersect a star.
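    For anyone who wants to pull the same linescan outside ImageJ, here is a rough numpy equivalent of Plot Profile (nearest-neighbor sampling is my simplification; ImageJ interpolates):

    ```python
    import numpy as np

    def linescan(img, n=2048):
        """Greyscale values along the upper-left to lower-right diagonal."""
        h, w = img.shape[:2]
        rows = np.linspace(0, h - 1, n).round().astype(int)
        cols = np.linspace(0, w - 1, n).round().astype(int)
        return img[rows, cols]  # one sample per point along the diagonal
    ```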

    Next, I can use ImageJ to determine a signal-to-noise ratio (SNR) by selecting a small star-free region in the image and computing the greyscale mean and standard deviation; I measured this in the center of the frame and also near a corner of the frame:

    Image       Center            Corner
    Pleiades    15640 +/- 1317    13239 +/- 1203
    Horsehead   26004 +/- 1285    12721 +/- 1188

    This brings up a question: I measured the SNR in terms of greyscale values, not in terms of bits or dB. I can convert the greyscale to dB easily enough, but what will make more sense is to think in terms of bit depth. I’m honestly not sure how to interpret my values in terms of that- it seems that I have about 11 bits of noise, so my 16-bit image has a dynamic range of only 5 bits (or 5 f-stops or 3.7 stellar magnitudes)? That doesn’t make sense. Help?
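    For concreteness, here is the conversion I have in mind, taking the star-free patch mean over its standard deviation as the SNR (whether that is the right definition of dynamic range is exactly my question):

    ```python
    import numpy as np

    def snr_stats(mean, std):
        snr = mean / std            # SNR of the star-free background patch
        db = 20 * np.log10(snr)     # amplitude convention for decibels
        bits = np.log2(snr)         # usable levels above the noise floor, in bits
        return snr, db, bits

    print(snr_stats(15640, 1317))   # Pleiades, frame center: SNR ~ 11.9, ~ 21.5 dB, ~ 3.6 bits
    ```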

    Here’s why I care about that question: dynamic range is what I need to maximize in order to separate faint objects from the sky background. Certainly, noise reduction is important, but as you will see, I also need to boost the dynamic range- and post-processing after stacking does this.

    I'll stop here for now: I've established some basic image metrics and determined them for 2 sample images. Moving forward, I'll rely on linescans to illustrate the process; there's no point in posting the images themselves.
     
  2. Jan 10, 2019 #2

    Drakkith
    Staff Emeritus
    Science Advisor
    2018 Award

    Before we jump into the more complex things, do you take any calibration frames when imaging so you can correct for vignetting, dark current, sensor bias, etc?
     
  3. Jan 10, 2019 #3

    Drakkith
    Staff Emeritus
    Science Advisor
    2018 Award

    Why is this important for what you want to do? Can you elaborate on what you mean when you say you want to optimize your image stacking and processing?
     
  4. Jan 12, 2019 #4

    Tom.G
    Science Advisor

    The linescans of the images seem to show both vignetting and cos² light falloff, though there isn't really enough data yet to pin it down. Flat-field images at a few different f-stops, with horizontal, vertical, and diagonal linescans of them, would give a better chance of spotting what is happening.

    A couple approaches come to mind though.
    1) Use a sliding window along a line and subtract the local mean value of the window from each pixel. Strictly, what gets subtracted should be the mean weighted by the window width, i.e. if the window is 8 pixels wide, 1/8 of the mean is subtracted each time the window slides along the line. This is similar to edge enhancement in digital photography; see the sketch below.
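    A minimal numpy sketch of the idea, using a plain (unweighted) moving average; the weighting described above is a refinement:

    ```python
    import numpy as np

    def subtract_local_mean(line, window=8):
        """Subtract a sliding-window local mean from each pixel along a scan line."""
        kernel = np.ones(window) / window
        background = np.convolve(line, kernel, mode='same')  # moving-average estimate
        return line - background  # stars survive; the slowly varying background goes to ~0
    ```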

    2) Based on inspection of image 2, the background brightness is radially symmetric with a slight curvature at either end. The curve shape is almost classical cos² light falloff with an added contribution. As speculation, that added contribution may be a mismatch between the scope aperture, image size, and the camera optics. Being radially symmetric, it could be modeled analytically and compensated that way, as sketched below.
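    A sketch of the analytic compensation, assuming pure cos²-type falloff centered on the frame (the exponent, center, and effective focal ratio would all be fit parameters in practice):

    ```python
    import numpy as np

    def radial_falloff(shape, f=1.0, power=2):
        """Model falloff as cos^power of the field angle about the frame center."""
        h, w = shape
        y, x = np.mgrid[0:h, 0:w]
        r = np.hypot(x - w / 2, y - h / 2) / max(h, w)  # normalized radius
        theta = np.arctan(r / f)                        # field angle for an assumed focal ratio f
        return np.cos(theta) ** power

    # flattened = image / radial_falloff(image.shape)   # single-channel image
    ```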

    Once you get the background flat, you can set a threshold below which all values are set to zero. This may make further processing easier and more effective. In audio processing this is the action of the Squelch control. In image processing, it is setting the black level.

    In image processing, contrast modification beyond a simple gain adjustment is done by adjusting the gamma of the image: each (normalized) intensity value is raised to the power gamma, which lifts or suppresses the midtones depending on whether gamma is below or above 1.
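    Both operations together, in sketch form (the threshold and gamma values are placeholders, to be tuned by eye or from the histogram):

    ```python
    import numpy as np

    def black_level_and_gamma(img, threshold, gamma):
        """Zero everything below the threshold, then apply a power-law gamma curve."""
        out = np.where(img < threshold, 0.0, img - threshold)  # squelch the background
        out = out / out.max()                                  # normalize to 0..1
        return out ** gamma                                    # gamma < 1 lifts faint detail
    ```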

    Any or all of the above may be available in your current software.

    Hope this helps.

    Cheers,
    Tom

  5. Jan 12, 2019 #5

    Andy Resnick
    Science Advisor
    Education Advisor

    Basically, I want to know simple things like: 1) How many images can I stack before hitting a point of diminishing returns? 2) How can I achieve the highest dynamic range when I downsample the resultant 32-bit/channel image to a printable 8-bit/channel image? 3) How can I maintain consistent color rendition? 4) What is the faintest object I can reasonably expect to be able to image?
     
  6. Jan 12, 2019 #6

    Andy Resnick
    Science Advisor
    Education Advisor

    Thanks, and I agree these are all important aspects to track, but mostly they miss the point. Fall-off happens; how can I best compensate? As I will show, the flat-field accuracy requirement (flat field as compared to bright field) is highly stringent and currently beyond my ability to fully achieve. Even worse, the sky background varies from day to day and is not spatially constant. The background brightness is often not exactly radially symmetric: city lights introduce an asymmetry, and the problem becomes worse with wider-angle lenses. Even at 105mm, the falloff is very asymmetric, as seen here:

    [image: 105mm frame showing asymmetric falloff]

    Most of your other points relate to post-processing of the stacked image. Local background subtraction, adjusting gamma, setting black and white points, etc. happen after stacking. However, the major problem I have yet to solve with post-processing is posterization... stay tuned!
     
  7. Jan 12, 2019 #7

    Andy Resnick
    Science Advisor
    Education Advisor

    I do flat-field correction for the fall-off, but nothing else. Honestly, I don't see how including dark and bias frames will result in a significant improvement (yet).
     
  8. Jan 12, 2019 #8

    Drakkith
    Staff Emeritus
    Science Advisor
    2018 Award

    Andy, can you send me one of your raw images? I'd like to try something in my image processing software. PM me.
     
  9. Jan 12, 2019 #9

    Tom.G
    Science Advisor

    My intent was to do the local background subtraction on the individual frames BEFORE stacking. That makes a much wider dynamic range possible and will decrease the tendency toward the posterization you are plagued with.
    With the huge spatial background variation you are seeing, dark and bias frames are pointless. The background variation completely swamps those effects.

    Cheers,
    Tom

    p.s. I see @Drakkith made an offer while I was typing.
    Drakkith, when/if you do some processing on Andy's image, could you post at least the before and after images?
     
  10. Jan 12, 2019 #10

    Andy Resnick
    Science Advisor
    Education Advisor

    I am hesitant to do this due to the number of images, but I could try a test run on a dozen images and see what happens. Thanks for the suggestion!
     
  11. Jan 12, 2019 #11

    Drakkith
    Staff Emeritus
    Science Advisor
    2018 Award

    How does that create a wider dynamic range?

    I disagree. I think dark and flat frames should always be taken and applied. Otherwise you can't accurately identify the background. Besides, dust donuts are even more prominent on images with bright backgrounds, and flat-field correction removes them.

    Certainly.
     
  12. Jan 12, 2019 #12

    Andy Resnick
    Science Advisor
    Education Advisor

    Part 2: stacking.

    Thanks for the comments and ideas; hopefully this post will start to clarify some of the technical problems. Recall, I am very much interested in learning how to quantify images (SNR, dynamic range) in terms of bit depth, and this post should demonstrate why.

    Stacking in DSS results in 32-bit/channel images, so I need to identify a few key concepts. Perhaps the most important is a ‘tone mapping curve’ that maps input grey values to output grey values. Tone mapping does not have to be linear, and I will show that the curve should have a specific shape (‘filmic tone mapping’ or ‘Reinhard tone mapping’) that is highly nonlinear.

    I typically stack about 200 14-bit RAW images at a time; that seems to be the (empirical) point of diminishing returns. Averaging 200 14-bit frames results in roughly a 14 + 8 = 22-bit image (log2(200) ≈ 7.6), which is 'somehow' embedded into a 32-bit image. Since the black and white values are fixed, the best (IMO) way to think about the image is in terms of the image histogram and how many bins/buckets are available:

    8-bit image: 256 'buckets'
    14-bit RAW: 16384 'buckets'
    16-bit TIF (average of 4 RAW pixels): 65536 'buckets'
    32-bit TIF: 4294967296 'buckets'
    average of 200 14-bit RAW frames ≈ 22-bit image: 4194304 'buckets'

    This is the origin of the next problem I encounter: 4194304 buckets are dispersed among 4294967296 slots, resulting in a sparse image histogram. DSS then allows me to downsample and save the 32-bit image to 16 bits, which is what I need for post-processing.
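    A quick check of that bucket arithmetic:

    ```python
    import numpy as np

    frames = 200
    raw_levels = 2 ** 14                 # 16384 distinct values per RAW frame
    stack_levels = raw_levels * frames   # averaging yields ~ raw_levels * N distinct sums
    print(np.log2(stack_levels))         # ~ 21.6, i.e. roughly a 22-bit image
    print(stack_levels / 2 ** 32)        # < 0.1% of the 32-bit histogram slots can be occupied
    ```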

    Let's look at what happens, starting with the f/4 Pleiades image, since flat-field correction isn't required. The tone map curve has a horizontal (input) axis and a vertical (output) axis, and DSS allows many different ways to apply a tone map. First, I'll show the simplest tone map: linear-linear, where both axes are linear scales. Here's the resultant linescan, when the horizontal axis has 4294967296 bins and the vertical axis has 4194304 bins:

    [linescan plot: linear-linear tone map]

    The predominant feature is 'posterization': not every 16-bit grey value occurs in the downsampled image, because most of the input 32-bit buckets are empty. So, although I would expect the background level to be a smooth curve, there are instead discrete values present. As expected, the noise is greatly decreased; the grey level statistics are:

    Center          Corner
    7887 +/- 14     7476 +/- 3

    The noise level has decreased from 11 bits to 3 or 4 bits, in agreement with the number of images averaged together. But more problematic, the dynamic range of the image has also decreased! Look at the height of the 'star spike' compared to the background level: it's much lower than in the single RAW image. What this means is that with this tone map, there's no way I can extract the nebula features, faint stars, or anything 'slightly above' the background level.

    So instead, I use a different tone map: the 'filmic' tone map. I also use an alternate horizontal axis, 'log-root' scaling, so that the background sits in the midrange tones rather than the dark tones, which is what happens with linear scaling. Using that tone map, look at the linescan:

    [linescan plot: filmic tone map with log-root scaling]

    The posterization is less pronounced, but the noise is also higher (still less than in individual frames):

    Center            Corner
    18088 +/- 458     8505 +/- 148

    I have also increased the dynamic range: I have assigned more 'buckets' in the neighborhood of the background level, 'stretching' the image contrast at midrange tones and allowing me (in post-processing) to separate the nebula from the background. You can see this in the number of 'star spikes' that are simply not present in the linear-linear linescan. A side effect of this tone map is that stars get larger, sometimes considerably so. It's also true that I have 'amplified' the falloff, so perhaps flat-field correction would be helpful here. But this is the next tricky bit: accurate flat-field correction. I'll cover that next.
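    For reference, the Reinhard operator has a very simple closed form; DSS's 'filmic' curve and log-root axis are its own, but the qualitative shape (stretch the low end, compress the highlights) is the same. A sketch:

    ```python
    import numpy as np

    def reinhard(x, white):
        """Reinhard global tone map: 'white' is the input level that maps to 1.0."""
        return x * (1 + x / white**2) / (1 + x)

    # With stars peaking at x = 4: reinhard(4, 4) = 1.0, while a background at
    # x = 0.2 maps to ~0.17 instead of the 0.05 a linear map would give it.
    ```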

    So, at this point in the process, I am stuck with a highly posterized image. There may be hope that when I downsample again to 8 bits/channel this can be removed, but to date I have been unsuccessful. The posterization also causes problems with color consistency. Maybe some of you have better tone mapping strategies, let me know!

    But all is not lost- what I learned to do was to treat each 200-image stack as a ‘substack’; stacking together 4 or 5 16-bit tone mapped substacks decreases the amount of posterization present.
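    In code terms, the substack trick is just a second round of averaging carried out in floating point (a sketch; the independent quantization of each substack is what refills the histogram gaps):

    ```python
    import numpy as np

    def combine_substacks(substacks):
        """Average several tone-mapped 16-bit substacks in float to smooth posterization."""
        # keep the result in float (or dither) until the final 8-bit conversion
        return np.mean([s.astype(np.float64) for s in substacks], axis=0)
    ```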
     
  13. Jan 12, 2019 #13

    davenn
    Science Advisor
    Gold Member

    And how are you doing that?

    Your images and graphs would indicate that you are not doing dark or flat frames, or if you are, you are not doing them correctly. If you were doing them correctly, you wouldn't see that huge variation in background brightness across those images. Brighter centres in the images are a sure sign of a lack of flat and dark frames; with them, the frames would have a flatter brightness across the entire frame.

    Your other big issue, signal-to-noise level, is because of your very short exposure times ... I think you stated 8 sec in the other thread.

    There is NO SUBSTITUTE for longer exposures when it comes to SNR ... by default, a longer exposure automatically produces better SNR. Many 8 sec exposures will never produce the SNR that a single 5 minute exposure will produce.

    My 30 sec exposures do better than your 8 sec, but mine would still vastly improve with exposures of 1 minute or more for each stacked frame.


    As much as I hate the statement "I have seen", I need to use it, as I don't have the time to scour through dozens of posts on a number of different forums. Anywhere from 30-70 stacked frames would be common, so let's average that and say 50 frames stacked (I will try to find some references). Now, again, that will also depend somewhat on your exposure times: 1.5 hrs (90 mins) of 5 minute exposures (18 exposures) is going to provide a vastly superior image compared to 1.5 hrs of 8 sec exposures (675 exposures), again because of the better SNR that 5 min exposures provide, even though the total time is the same.

    A good example:

    My 20 lights, 9 darks, no flats:
    [image: Eta Carinae, 20 lights / 9 darks, 400mm, stacked in Sequator]

    My mate's 156 x 40 sec lights, 16 darks:

    [image: Eta Carinae, by MichaelM]


    Did those extra ~130 light frames produce a better image? Honestly, I don't think so :frown:


    I would quite safely state that 99.9% of astrophotographers wouldn't even contemplate most of what you have commented on in your first post .... I, for sure, don't :wink:

    Why do you think I make the effort to load all my kit into the car and head to a darker site? :smile:

    If you or I really want to do serious imaging from home, deep in the sky glow of suburbia, the ONLY way to do it is to use narrow-band filtering .... Ha, OIII, an Astronomik CLS filter, etc. You can then produce awesome images even during a full moon close to your target.


    Just some examples; you can read up further:

    Astronomik CLS Filter
    https://www.astronomik.com/en/visual-filters/cls-filter.html

    IDAS P2 LPS filter
    https://www.sciencecenter.net/hutech/idas/lps.htm

    Astronomik H-Alpha 12nm CCD Filter - Canon EOS APS Clip
    https://optcorp.com/products/astronomik-h-alpha-12nm-ccd-filter-canon-eos-aps-clip


    Basically, if you want to do imaging from within a light-polluted location, you really have no choice but to use filters. Then you can up your exposure times (assuming your mount is a good tracking mount and well polar aligned?). Otherwise you are just asking DSS, or another stacker and post-processing program, to do things it cannot do.



    Dave
     
  14. Jan 12, 2019 #14

    Drakkith
    Staff Emeritus
    Science Advisor
    2018 Award

    I disagree. The total exposure time with 8 sec frames will need to be higher, but you should have little trouble getting equal or better SNR with enough 8 sec frames.

    Perhaps I'm misreading something. Were your images taken with twenty 30 sec frames? If so, your mate's picture must have been taken in heavily light polluted skies or something, as he has more frames at longer exposure times. His image should look at least as good as yours, if not substantially better.
     
  15. Jan 12, 2019 #15

    davenn
    Science Advisor
    Gold Member

    And speaking of DSS

    I have almost deleted it off my computer, as I have been getting more and more frustrated with its refusal to align a bunch of frames because it thinks there are not enough stars to do the alignment with.

    I have started using Sequator
    https://sites.google.com/site/sequatorglobal/

    It's so easy to use
     
  16. Jan 12, 2019 #16

    Tom.G
    Science Advisor

    In light of the OP's later post, it doesn't. The posted image lost 70% of the dynamic range to background, and I was assuming the whole processing chain used the same number of bits... and I probably should have said the available dynamic range.

    OK, I'll go along with that. I was apparently over-simplifying in trying to get rid of the gross errors first! :frown:
     
  17. Jan 12, 2019 #17

    davenn
    Science Advisor
    Gold Member

    No, I have seen the opposite so often ..... I can't express the results mathematically; just the physical results. I have never seen multiple very short exposures, say Andy's 8 sec exposures, equal a single longer exposure. The single longer exposure will always produce a better SNR.



    We were at the same site :smile:

    This is what I was saying in that post: there is a point where extra exposures don't add to the image, which is what I was addressing in Andy's question.

    I see in a later post of his that he is talking about stacking 200 exposures. Honestly, I think that is a waste of time.
     
  18. Jan 12, 2019 #18

    Tom.G
    Science Advisor

    The two images @davenn posted look like they were processed with different Gamma and Black level threshold, with the second image having both settings at a higher level.
     
  19. Jan 12, 2019 #19

    davenn
    Science Advisor
    Gold Member


    possibly :smile:

    I may ask him for 20 of his frames and process them myself in the same way I did with mine. It would be interesting to then see the differences.
     
  20. Jan 12, 2019 #20

    Drakkith
    Staff Emeritus
    Science Advisor
    2018 Award

    So you're saying no amount of 5 second exposures will equal or beat a single 10 second exposure? I can say with quite a bit of confidence that this isn't true, as I've stacked many short exposures before. The problem with short exposures compared to longer exposures is that the noise from the sensor (and perhaps a few other electronic sources) during readout is the same regardless of the length of the exposure. If this noise is a significant portion of the total noise, then you're trying to 'beat the noise down' with stacking, which is subject to a non-linear effect (namely that doubling the SNR, equivalent to halving the noise, requires quadrupling the number of exposures). But with longer exposures, this source of noise is much less significant. I can show a bit of math if you'd like.
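    Here's the gist in sketch form, with S the signal (electrons) collected per short frame and R the read noise (electrons) added once per readout:

    ```latex
    \text{one short frame:}\quad \mathrm{SNR}_1 = \frac{S}{\sqrt{S + R^2}}
    \text{stack of } N \text{ frames:}\quad \mathrm{SNR}_N = \frac{NS}{\sqrt{N(S + R^2)}} = \sqrt{N}\,\mathrm{SNR}_1
    \text{one exposure } N \text{ times as long:}\quad \mathrm{SNR}_\text{long} = \frac{NS}{\sqrt{NS + R^2}}
    ```

    For equal total time, the single long exposure wins whenever R is significant, since it pays the read-noise penalty once instead of N times; but because the stack still grows as sqrt(N), enough additional short frames can always close the gap.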

    Then there must be a big difference in gear, optics, processing, or something. With more frames and longer exposures his image should be much, much better.
     