Astro Image Stacking (optimize DSS image stacking and post processing)

  • #1
Andy Resnick
I created this thread because every online resource I have examined to date has been largely worthless- either totally over-engineered complexity or superficial garbage (can you tell I am irritated?). The problem is simple: optimize DSS image stacking and post processing based on quantitative image data. The essential metrics are 1) signal-to-noise ratio and 2) dynamic range.

Here’s the basic scenario: I am in a light-polluted urban environment. I image at low f-number, so fall-off (image non-uniformity) is a significant issue. My primary goal of image stacking (for me) is to completely remove the sky background across the entire field of view and to compress the dynamic range of the field of view to 'amplify' faint objects with respect to bright stars.

Let’s start with single frames- this already introduces potential confusion. I acquire frames in a 14-bit RAW format, but I have no way of directly accessing that data. So, for this thread, I used Nikon’s software to convert a single-channel 14-bit RAW image into a 3-channel 16-bit TIF (RGB format). Most likely, this is done by averaging 4 neighboring same-color pixels in the RAW data to generate a single TIF pixel (14 bits + 2 bits from averaging 4 pixels = 16 bits). Here are two images, one taken at f/4 of the Pleiades, and the other taken at f/2.8 of the Flame and Horsehead nebulae:

[Image: d2bdc0c7-446d-444e-a4f2-d623ff8b8fa5-original.jpg]

[Image: 1e1836d1-b494-49be-a368-461220bfac6d-original.jpg]


From these, I use ImageJ (via its Fiji distribution) to extract quantitative data. First, I’ll provide a ‘linescan’ from the upper-left corner to the lower-right corner of each image. This graph returns the greyscale value at each pixel along the line:

[Image: d764577f-a7fc-4935-bcd9-5e37b4bd703f-original.jpg]

[Image: c859b4d2-db4b-40c5-be8f-f637f15c9a8b-original.jpg]


There are three basic results here. First, as you can see, the falloff is especially significant at f/2.8. Second, you can see the effect of noise (both images were acquired at ISO 1000), and third, you can see tall spikes where the scan line happens to intersect a star.

Next, I can use ImageJ to determine a signal-to-noise ratio (SNR) by selecting a small star-free region in the image and computing the greyscale mean and standard deviation; I measured this in the center of the frame and also near a corner of the frame:

            center:            corner:
Pleiades    15640 +/- 1317     13239 +/- 1203
Horsehead   26004 +/- 1285     12721 +/- 1188

This brings up a question: I measured the SNR in terms of greyscale values, not in terms of bits or dB. I can convert the greyscale to dB easily enough, but what will make more sense is to think in terms of bit depth. I’m honestly not sure how to interpret my values in terms of that- it seems that I have about 11 bits of noise, so my 16-bit image has a dynamic range of only 5 bits (or 5 f-stops or 3.7 stellar magnitudes)? That doesn’t make sense. Help?
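To put numbers on those conversions, here is a minimal Python sketch using the Pleiades center values measured above (treating the measured mean as 'signal' and the standard deviation as the noise floor, which is exactly the interpretation I'm unsure about):

Code:
import math

full_scale = 2**16            # grey levels in the 16-bit TIF
mean, std = 15640, 1317       # Pleiades, center of frame (measured above)

snr = mean / std                          # dimensionless SNR
snr_db = 20 * math.log10(snr)             # the same ratio in dB
noise_bits = math.log2(std)               # bits occupied by the noise floor
headroom = math.log2(full_scale / std)    # stops between noise floor and white

print(f"SNR ~ {snr:.1f} ({snr_db:.1f} dB)")
print(f"noise floor ~ {noise_bits:.1f} bits; headroom ~ {headroom:.1f} stops")

Run on those numbers, this gives a noise floor of about 10.4 bits and roughly 5.6 stops of headroom- which is exactly the puzzling result I describe above.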

Here’s why I care about that question: dynamic range is what I need to maximize in order to separate faint objects from the sky background. Certainly, noise reduction is important, but as you will see, I also need to boost the dynamic range- and post-processing after stacking does this.

I’ll stop here for now: I’ve established some basic image metrics and determined them for 2 sample images. Moving forward, I will rely on linescans to illustrate the process; there’s no point in posting the images.
 

  • #2
Before we jump into the more complex things, do you take any calibration frames when imaging so you can correct for vignetting, dark current, sensor bias, etc?
 
  • #3
Andy Resnick said:
This brings up a question: I measured the SNR in terms of greyscale values, not in terms of bits or dB. I can convert the greyscale to dB easily enough, but what will make more sense is to think in terms of bit depth. I’m honestly not sure how to interpret my values in terms of that- it seems that I have about 11 bits of noise, so my 16-bit image has a dynamic range of only 5 bits (or 5 f-stops or 3.7 stellar magnitudes)? That doesn’t make sense. Help?

Here’s why I care about that question: dynamic range is what I need to maximize in order to separate faint objects from the sky background. Certainly, noise reduction is important, but as you will see, I also need to boost the dynamic range- and post-processing after stacking does this.

Why is this important for what you want to do? Can you elaborate on what you mean when you say you want to optimize your image stacking and processing?
 
  • #4
The linescans of the images seem to show both vignetting and cos² light falloff. There isn't really enough data yet to pin it down. Flat-field (bright-field) images at a few different f-stops, plus horizontal, vertical, and diagonal linescans of them, would give a better chance of spotting what is happening.

A couple of approaches come to mind though.
1) Use a sliding window along a line and subtract the local mean value of the window from each pixel. Strictly, what gets subtracted should be the mean weighted by the window width, i.e. if the window is 8 pixels wide, 1/8 of the mean is subtracted each time the window slides one pixel along the line. This is similar to edge enhancement in digital photography (see the sketch after item 2).

2) Based on inspection of image 2, the background brightness is radially symmetric with a slight curvature at either end. This curve shape is almost classical cos² light falloff, with an added contribution. As speculation, that added contribution may be a mismatch between the scope aperture, image size, and the camera optics. Being radially symmetric, it could be modeled analytically and compensated that way.
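Here's a minimal Python sketch of item 1 (this computes the full local mean with a moving-average filter rather than the incremental weighted subtraction described above; the window width is an arbitrary placeholder):

Code:
import numpy as np
from scipy.ndimage import uniform_filter1d

def subtract_local_mean(line, window=101):
    """Subtract a sliding-window local mean from a 1-D linescan.

    `window` (odd, in pixels) must be much wider than a star's profile,
    or the stars get flattened along with the background.
    """
    background = uniform_filter1d(line.astype(float), size=window,
                                  mode='nearest')
    return line.astype(float) - background

# Toy example: tilted sky background + noise + one bright 'star'
rng = np.random.default_rng(0)
line = np.linspace(8000, 12000, 1024) + 200 * rng.standard_normal(1024)
line[500] += 30000
flattened = subtract_local_mean(line)   # background now centered on zero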

Once you get the background flat, you can set a threshold below which all values are set to zero. This may make further processing easier and more effective. In audio processing this is the action of the Squelch control. In image processing, it is setting the black level.

In image processing, contrast modification beyond a simple gain adjustment is done by adjusting the gamma of the image: each normalized pixel value is raised to a power (the gamma exponent), which lifts the midtones when the exponent is below 1 and suppresses them when it is above 1.
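As a concrete (if simplified) sketch of those last two operations, with arbitrary placeholder values for the threshold and gamma:

Code:
import numpy as np

def black_level_and_gamma(img, black=0.05, gamma=0.6):
    """Squelch, then power-law contrast adjustment.

    `img` is a float array normalized to [0, 1].  Values below `black`
    are clipped to zero (the black-level / squelch step); the remainder
    is renormalized and raised to `gamma` (gamma < 1 lifts midtones).
    """
    out = np.clip(img - black, 0.0, None) / (1.0 - black)
    return out ** gamma

# Usage on a 16-bit frame: normalize, adjust, convert back
frame = np.random.randint(0, 2**16, (64, 64)).astype(float) / (2**16 - 1)
out16 = np.round(black_level_and_gamma(frame) * (2**16 - 1)).astype(np.uint16)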

Any or all of the above may be available in your current software.

Hope this helps.

Cheers,
Tom

  • #5
Drakkith said:
Why is this important for what you want to do? Can you elaborate on what you mean when you say you want to optimize your image stacking and processing?

Basically, I want to know simple things like: 1) How many images can I stack before hitting a point of diminishing returns? 2) How can I achieve the highest dynamic range when I downsample the resultant 32-bit/channel image to a printable 8-bit/channel image? 3) How can I maintain consistent color rendition? 4) What is the faintest object I can reasonably expect to be able to image?
 
  • #6
Tom.G said:
The linescans of the images seem to show both vignetting and cos² light falloff. There isn't really enough data yet to pin it down. Flat-field (bright-field) images at a few different f-stops, plus horizontal, vertical, and diagonal linescans of them, would give a better chance of spotting what is happening.

A couple approaches come to mind though.
<snip>

Hope this helps.

Cheers,
Tom

Thanks, and I agree these are all important aspects to track, but mostly they miss the point. Fall-off happens; how can I best compensate? As I will show, the flat-field accuracy requirement (flat field as compared to bright field) is highly stringent and currently beyond my ability to fully achieve. Even worse, the sky background varies from day to day and is not spatially constant. The background brightness is often not exactly radially symmetric: city lights introduce an asymmetry, and the problem becomes worse with wider-angle lenses. Even at 105mm, the falloff is very asymmetric, see here:

[Image: 8d093fec-90ce-4faf-b8c5-08cdb47deb36-original.jpg]


Most of your other points relate to post-processing of the stacked image. Local background subtraction, adjusting gamma, setting black and white points, etc. all happen after stacking. However, the major problem I have yet to solve with post-processing is posterization... stay tuned!
 

  • #7
Drakkith said:
Before we jump into the more complex things, do you take any calibration frames when imaging so you can correct for vignetting, dark current, sensor bias, etc?

I do flat-field correction for the fall-off, but nothing else. Honestly, I don't see how including dark and bias frames will result in a significant improvement (yet).
 
  • #8
Andy, can you send me one of your raw images? I'd like to try something in my image processing software. PM me.
 
  • #9
Andy Resnick said:
Local background subtraction, adjusting gamma, setting black and white points, etc. happens after stacking.
My intent was to do the local background subtraction on the individual frames BEFORE stacking. That makes a much wider dynamic range possible and will decrease the tendency for posterization you are plagued with.
Andy Resnick said:
I do flat-field correction for the fall-off, but nothing else. Honestly, I don't see how including dark and bias frames will result in a significant improvement (yet).
With the huge spatial background variation you are seeing, dark and bias frames are pointless. The background variation completely swamps those effects.

Cheers,
Tom

p.s. I see @Drakkith made an offer while I was typing.
Drakkith, when/if you do some processing on Andy's image, could you post at least the before and after images?
 
  • #10
Tom.G said:
My intent was to do the local background subtraction on the individual frames BEFORE stacking. That makes a much wider dynamic range possible and will decrease the tendency for posterization you are plagued with.

I am hesitant to do this due to the number of images, but I could try a test run on a dozen images and see what happens. Thanks for the suggestion!
 
  • #11
Tom.G said:
My intent was to do the local background subtraction on the individual frames BEFORE stacking. That makes a much wider dynamic range possible and will decrease the tendency for posterization you are plagued with

How does that create a wider dynamic range?

Tom.G said:
With the huge spatial background variation you are seeing, dark and bias frames are pointless. The background variation completely swamps those effects.

I disagree. I think dark and flat frames should always be taken and applied. Otherwise you can't accurately identify the background. Besides, dust donuts are even more prominent on images with bright backgrounds, and flat correction removes them.

Tom.G said:
Drakkith, when/if you do some processing on Andy's image, could you post at least the before and after images?

Certainly.
 
  • #12
Part 2: stacking.

Thanks for the comments and ideas; hopefully this post will start to clarify some of the technical problems. Recall, I am very much interested in learning how to quantify images (SNR, dynamic range) in terms of bit depth, and this post should demonstrate why.

Stacking in DSS results in 32-bit/channel images, so I need to identify a few key concepts. Perhaps the most important is a ‘tone mapping curve’ that maps input grey values to output grey values. Tone mapping does not have to be linear, and I will show that the curve should have a specific shape (‘filmic tone mapping’ or ‘Reinhard tone mapping’) that is highly nonlinear.

I typically stack about 200 14-bit RAW images at a time; that seems to be the (empirical) point of diminishing returns. Averaging 200 14-bit frames results in a 14 + 8 = 22 bit image (since log2(200) ≈ 8) which is ‘somehow’ embedded into a 32-bit image. Since the black and white values are fixed, the best (IMO) way to think about the image is in terms of the image histogram and how many bins/buckets are available (a quick sanity check of the arithmetic follows the list):

8-bit image: 256 'buckets'
14-bit RAW: 16384 'buckets'
16-bit TIF (average of 4 RAW pixels): 65536 'buckets'
32-bit TIF: 4294967296 'buckets'
average of 200 14-bit RAW frames = 22-bit image: 4194304 'buckets'
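A quick check of that arithmetic; the only assumption is that averaging N frames adds log2(N) bits of precision, which holds for ideal averaging:

Code:
import math

# Histogram 'buckets' available at each bit depth
for label, bits in [("8-bit image", 8), ("14-bit RAW", 14),
                    ("16-bit TIF", 16), ("32-bit TIF", 32)]:
    print(f"{label:12s}: {2**bits:>13,} buckets")

# Averaging N frames adds log2(N) bits of precision to a 14-bit frame
n_frames = 200
eff_bits = 14 + math.log2(n_frames)   # ~21.6, i.e. the '22 bit' figure above
print(f"200-frame average: ~2^{eff_bits:.1f} = {2**22:,} distinct levels")
print(f"dispersed among {2**32:,} 32-bit slots")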

This is the origin of the next problem I encounter: 4194304 buckets are dispersed into 4294967296 slots, resulting in a sparse image histogram. DSS then allows me to downsample and save the 32-bit image to 16-bits, which is what I need for post-processing.

Let’s look at what happens, starting with the f/4 Pleiades image, since flat-field correction isn’t required. The tone map curve has a horizontal (input) axis and a vertical (output) axis, and DSS allows a lot of different ways to apply a tone map. First, I’ll show the simplest tone map: linear-linear, where both axes are linear scales. Here’s the resultant linescan, when the horizontal axis has 4294967296 bins and the vertical axis has 4194304 bins:

[Image: e2fa0983-a67a-40df-8a6c-b82358cbffc7-original.jpg]


The predominant feature is ‘posterization’: not every 16-bit grey value occurs in the downsampled image, because most of the input 32-bit buckets are empty. So, although I would expect the background level to be a smooth curve, there are instead discrete values present. As expected, the noise is greatly decreased; the grey-level statistics are:

          center:        corner:
          7887 +/- 14    7476 +/- 3

The noise level has decreased from 11 bits to 3 or 4 bits, in agreement with the number of images averaged together. But more problematic, the dynamic range of the image has also decreased! Look at the height of the ‘star spike’ as compared to the background level: it’s much lower than in the single RAW image. What this means is that with this tone map, there’s no way I can extract the nebula features, faint stars, or anything ‘slightly above’ the background level.

So instead, I use a different tone map, the ‘filmic’ tone map. I also use an alternate horizontal axis, ‘log-root’ scaling, so that the background sits in the midrange tones rather than the dark tones, which is where it lands with linear scaling. Using that tone map, look at the linescan:

[Image: ecca1142-9808-43af-956b-24635bbf4c77-original.jpg]


The posterization is less pronounced, but the noise is also higher (still less than in individual frames):

          center:          corner:
          18088 +/- 458    8505 +/- 148

I have also increased the dynamic range: I have assigned more ‘buckets’ in the neighborhood of the background level, ‘stretching’ the image contrast in the midrange tones and allowing me (in post-processing) to separate the nebula from the background. You can see this in the number of ‘star spikes’ that are simply not present in the linear-linear linescan. Another side effect of this tone map is that stars get larger, sometimes considerably so. It’s also true that I have ‘amplified’ the falloff, so perhaps flat-field correction would be helpful here. But that is the next tricky bit: accurate flat-field correction. I’ll cover it next.
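As an aside, the Reinhard-style curve mentioned at the top of this post is easy to sketch. DSS doesn't document its exact curves, so this is a generic stand-in, and the `exposure` factor is an invented knob playing the role of the input-axis scaling:

Code:
import numpy as np

def reinhard(img, exposure=8.0):
    """Classic Reinhard tone map: out = x / (1 + x).

    `img` is a float array in [0, 1]; `exposure` scales the input so the
    sky background lands on the steep part of the curve, which assigns
    more output levels near the background and rolls off the highlights
    instead of clipping them.
    """
    x = exposure * img
    return x / (1.0 + x)

stack = np.random.rand(64, 64)       # stand-in for a normalized 32-bit stack
out16 = np.round(reinhard(stack) * (2**16 - 1)).astype(np.uint16)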

So, at this point in the process, I am stuck with a highly posterized image. There may be hope that when I downsample again to 8 bits/channel this can be removed, but to date I have been unsuccessful. The posterization also causes problems with color consistency. Maybe some of you have better tone mapping strategies, let me know!

But all is not lost- what I learned to do was to treat each 200-image stack as a ‘substack’; stacking together 4 or 5 16-bit tone mapped substacks decreases the amount of posterization present.
 

  • #13
Drakkith said:
Before we jump into the more complex things, do you take any calibration frames when imaging so you can correct for vignetting, dark current, sensor bias, etc?

Andy Resnick said:
Basically, I want to know simple things like: 1) how many images can I stack before hitting a point of diminishing return? 2) How can I achieve the highest dynamic range when I downsample the resultant 32-bit/channel image to a printable 8 bit/channel image? 3) How can I maintain consistent color rendition? 4) what is the faintest object I can reasonably expect to be able to image?

Andy Resnick said:
I do flat-field correction for the fall-off, but nothing else. Honestly, I don't see how including dark and bias frames will result in a significant improvement (yet).

And how are you doing that?

Your images and graphs would indicate that you are not taking dark or flat frames, or, if you are, that you are not taking them correctly, because if you were, you wouldn't see that huge variation in background brightness across those images. A brighter centre of the image is a sure sign of missing flat and dark frames; with them, the frames would have a flatter brightness across the entire frame.

Your other big issue, the signal-to-noise level, is because of your very short exposure times ... I think you stated 8 sec in the other thread.

There is NO SUBSTITUTE for longer exposures when it comes to SNR ... by default, a longer exposure automatically produces better SNR.
Many 8-sec exposures will never produce the SNR that a single 5-minute exposure will produce.

My 30-sec exposures do better than your 8-sec, but mine would still vastly improve with exposures of 1 minute or more for each stacked frame.
Andy Resnick said:
Basically, I want to know simple things like: 1) How many images can I stack before hitting a point of diminishing returns?

As much as I hate the statement "I have seen", I need to use it, as I don't have the time to scour through dozens of posts on a number of different forums ... anywhere from 30-70 frames would be in the common range, so let's average that and say 50 frames stacked (I will try to find some references).
Now, again, that will also depend somewhat on your exposure times.
1.5 hrs (90 min) of 5-minute exposures (18 exposures) is going to provide a vastly superior image compared to 1.5 hrs of 8-sec exposures (675 exposures), again because of the better SNR that 5-minute exposures provide, even though the total time is the same.

a good example ...

my 20 lights, 9 darks, no flats
[Image: ETA CARINA 20Lw9D 400mm Sequator1sm.jpg]


my mate's 156 x 40-sec lights, 16 darks

[Image: eta carina - MichaelM.jpg]
Did those extra ~130 light frames produce a better image? ... Honestly, I don't think so :frown:

I would, quite safely, state that I doubt that 99.9% of astrophotographers would even contemplate most of what you have commented on in your first post ... I, for sure, don't :wink:

Why do you think I make the effort to load all my kit into the car and head to a darker site ? :smile:

If you or I really want to do serious imaging from home, deep in the sky glow of suburbia, the ONLY way to do it is to use narrow-band filtering ... Ha, OIII, an Astronomik CLS filter, etc ... you can then produce awesome images even during a full moon close to your target.

Just some examples; you can read up further ...

Astronomik CLS Filter
https://www.astronomik.com/en/visual-filters/cls-filter.html

IDAS P2 LPS filter
https://www.sciencecenter.net/hutech/idas/lps.htm

Astronomik H-Alpha 12nm CCD Filter - Canon EOS APS Clip
https://optcorp.com/products/astronomik-h-alpha-12nm-ccd-filter-canon-eos-aps-clip

Basically, if you want to do imaging from within a light-polluted location, you really have no choice but to use filters. Then you can up your exposure times (assuming your mount is a good tracking mount and well polar aligned?). Otherwise you are just asking DSS, or another stacker and post-processing program, to do things it cannot do.

Dave
 

  • #14
davenn said:
There is NO SUBSTITUTE for longer exposures when it comes to SNR ... by default, a longer exposure automatically produces better SNR.
Many 8-sec exposures will never produce the SNR that a single 5-minute exposure will produce.

I disagree. The total exposure time with 8-sec frames will need to be higher, but you should have little trouble getting equal or better SNR with enough 8-sec frames.

davenn said:
Did those extra ~130 light frames produce a better image? ... Honestly, I don't think so :frown:

Perhaps I'm misreading something. Were your images taken with twenty 30 sec frames? If so, your mate's picture must have been taken in heavily light polluted skies or something, as he has more frames at longer exposure times. His image should look at least as good as yours, if not substantially better.
 
  • #15
And speaking of DSS

I have almost deleted it off my computer, as I have been getting more and more frustrated with its refusal to align a bunch of frames because it thinks there are not enough stars to do the alignment with.

I have started using Sequator
https://sites.google.com/site/sequatorglobal/

It's so easy to use
 
  • #16
Drakkith said:
How does that create a wider dynamic range?
In light of the OP's later post, it doesn't. The posted image lost 70% of the dynamic range to the background, and I was assuming the whole processing chain used the same number of bits ... and I probably should have said the available dynamic range.

Drakkith said:
I disagree. I think dark and flat frames should always be taken and applied. Otherwise you can't accurately identify the background. Besides, dust donuts are even more prominent on images with bright backgrounds, and flat correction removes them.
OK, I'll go along with that. I was apparently over-simplifying in trying to get rid of the gross errors first! :frown:
 
  • #17
Drakkith said:
I disagree. The total exposure time with 8-sec frames will need to be higher, but you should have little trouble getting equal or better SNR with enough 8-sec frames.

No, I have seen the opposite so often ... I can't express the results mathematically; I just have the physical results.
I have never seen multiple very short exposures, say Andy's 8-sec exposures, equal a single longer exposure.
The longer single exposure will always produce a better SNR.
Drakkith said:
Perhaps I'm misreading something. Were your images taken with twenty 30 sec frames? If so, your mate's picture must have been taken in heavily light polluted skies or something, as he has more frames at longer exposure times. His image should look at least as good as yours, if not substantially better.
we were at the same site :smile:

This is what I was saying in that post ... there is a point where extra exposures don't add to the image, which is what I was addressing in Andy's question.

I see in a later post of his that he is talking about 200 exposures stacked. Honestly, I think that is a waste of time.
 
  • #18
The two images @davenn posted look like they were processed with different Gamma and Black level threshold, with the second image having both settings at a higher level.
 
  • #19
Tom.G said:
The two images @davenn posted look like they were processed with different Gamma and Black level threshold, with the second image having both settings at a higher level.
possibly :smile:

I may ask him for 20 of his frames and process them myself in the same way I did mine.
It would be interesting to then see any differences.
 
  • #20
davenn said:
No, I have seen the opposite so often ... I can't express the results mathematically; I just have the physical results.
I have never seen multiple very short exposures, say Andy's 8-sec exposures, equal a single longer exposure.

So you're saying no amount of 5 second exposures will equal or beat a single 10 second exposure? I can say with quite a bit of confidence that this isn't true, as I've stacked many short exposures before. The problem with short exposures compared to longer exposures is that the noise from the sensor (and perhaps a few other electronic sources) during readout is the same regardless of the length of the exposure. If this noise is a significant portion of the total noise, then you're trying to 'beat the noise down' with stacking, which is subject to a non-linear effect (namely that doubling the SNR, equivalent to halving the noise, requires quadrupling the number of exposures). But with longer exposures, this source of noise is much less significant. I can show a bit of math if you'd like.
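Here's the gist of that math as a sketch, using the standard CCD signal-to-noise equation; all the rates below are invented for illustration, not measurements from any particular camera:

Code:
import math

def stack_snr(n_frames, t, signal_rate, sky_rate, read_noise):
    """SNR of n_frames stacked exposures of t seconds each.

    signal_rate and sky_rate are in electrons/s/pixel; read_noise is
    electrons RMS per frame.  Shot noise scales with total light, but
    each frame contributes its own dose of read noise.
    """
    signal = n_frames * signal_rate * t
    variance = n_frames * (signal_rate * t + sky_rate * t + read_noise**2)
    return signal / math.sqrt(variance)

# Same 90 minutes of integration, split two ways (assumed rates):
S, B, R = 0.5, 20.0, 8.0
print("675 x 8 s  :", round(stack_snr(675, 8, S, B, R), 1))   # ~6.9
print(" 18 x 300 s:", round(stack_snr(18, 300, S, B, R), 1))  # ~8.1

Note that the brighter the sky, the smaller the short-exposure penalty, since the sky shot noise swamps the per-frame read noise; under dark skies the gap widens.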

davenn said:
we were at the same site :smile:

Then there must be a big difference in gear, optics, processing, or something. With more frames and longer exposures his image should be much, much better.
 
  • #21
@Andy Resnick one thing I just noticed was that you seem to be comparing the noise of the two processed images in post #12. I think this is problematic. The tone mapping process appears to alter the values of the pixels themselves (except the linear tone mapping), so trying to determine the noise from looking at their values doesn't seem right to me. Any noise measurements should be done on the raw images before/after calibration and stacking, but before any other processing is done. If you have to use tone mapping when stacking, then I'd suggest only making noise measurements on the linear mapping.

Andy Resnick said:
Let’s look at what happens, starting with the f/4 Pleiades image, since flat-field correction isn’t required. The tone map curve has a horizontal (input) axis and a vertical (output) axis, and DSS allows a lot of different ways to apply a tone map. First, I’ll show the simplest tone map: linear-linear, where both axes are linear scales. Here’s the resultant linescan, when the horizontal axis has 4294967296 bins and the vertical axis has 4194304 bins:

Your first linescan is very confusing. There appear to be two 'floors' in the pixel values, creating a plateau-like shape on the linescan. I would expect a linear mapping to preserve the pixel values a lot better. Do you know more about what this tone mapping is doing? Does the conversion between 14, 16, 22, and 32-bit images appropriately scale your pixel values to preserve the dynamic range?
 
  • #22
davenn said:
<snip>

I would, quite safely, state that I doubt that 99.9% of astrophotographers would even contemplate most of what you have commented on in your first post ... I, for sure, don't :wink:

Sheesh- tell me how you really feel :)

davenn said:
Why do you think I make the effort to load all my kit into the car and head to a darker site ? :smile:

If you or I really want to do serious imaging from home, deep in the sky glow of suburbia, the ONLY way to do it is to use narrow-band filtering ...

And maybe that is really the only answer here; maybe I have indeed hit the limit of what I can do. Even so, I think it's reasonable to determine whether 'going pro' is indeed the only solution, since that fundamentally changes what I am doing. Right now, I can set up and start to acquire images within 10 minutes of deciding to go outside (and that includes polar aligning), and when I am done, everything is put away and I am in bed within 15 minutes. That is to say, right now astrophotography is easy and relaxing. I hesitate to make it a giant production: there's a silver-tier dark-sky park about an hour's drive from me, but making use of it greatly increases my level of pain. Right now, the computer is doing most of the work.

davenn said:
Basically, if you want to do imaging from within a light-polluted location, you really have no choice but to use filters. Then you can up your exposure times (assuming your mount is a good tracking mount and well polar aligned?). Otherwise you are just asking DSS, or another stacker and post-processing program, to do things it cannot do.

I'm not sure the problem is DSS per se- I think if I can control/eliminate posterization and figure out flat field correction better, I'll be happy.
 
  • #23
Drakkith said:
@Andy Resnick Your first linescan is very confusing. There appear to be two 'floors' in the pixel values, creating a plateau-like shape on the linescan. I would expect a linear mapping to preserve the pixel values a lot better. Do you know more about what this tone mapping is doing? Does the conversion between 14, 16, 22, and 32-bit images appropriately scale your pixel values to preserve the dynamic range?

If I understand your question, you are referring to what I am calling posterization, or maybe quantization error, and I agree: I expected the mapping to 'not do this'. I believe (and this is just my unfounded idea) that the features you describe arise from converting the 32-bit image to a 16-bit image, but this is a guess. In any case, what you are describing is the 'fundamental problem' in my image-reconstruction process, and I can't quite figure out how to get around it.
 
  • #24
Part 3: flat field correction.

Now that I’ve outlined the stacking process, let’s look at why flat-field correction helps. For this, I’ll use the 400/2.8 set of images (Horsehead Nebula). Here’s a set of linescans of tone-mapped, downsampled images without flat-field correction; the first uses a linear input scale and linear tone map, the second a log-root input scale and linear tone map, and the last a log-root Reinhard tone map:

[Image: 21e8c19f-9f8c-418f-a965-c1afa4c9a831-original.jpg]

[Image: bd94b538-96da-4980-9c06-1cfc4acdad2d-original.jpg]

[Image: 7bffad00-9829-4a84-bf16-2caf8a7a7009-original.jpg]


The posterization/quantization error is obvious, and the effect of the various tone maps should be clear. Again, both the dynamic range and the noise level are highest for the nonlinear tone map, though this may not be obvious given the large nonuniform background level. To fix the nonuniform background, DSS allows the use of ‘flats’: star-free images that are used to make the background level spatially uniform. Ideally, this step results in a flat background.

I say ‘ideally’ because the degree of exactness required to fully flatten the background has, so far, been beyond my ability. There are hints and tips everywhere about how to obtain ‘good’ flats; none of them really worked for me. Flat-field correction gets easier as the f-number increases; low f-number imaging is far more difficult to correct. In the end, I took a series of flats at varying f-number by imaging a white LCD display and then made combinations of various flats, so that I now have about 6 different flats to try when imaging at 400/2.8 (in total, I have about 30 flats for 5 or 6 imaging configurations). Selection of a particular flat is largely trial and error, and I have none that attempt to correct non-axisymmetric background levels.
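For reference, the correction itself is just a normalized division (a sketch; DSS's internal handling of master flats may differ in detail):

Code:
import numpy as np

def apply_flat(light, flat):
    """Divide a light frame by a normalized master flat.

    The flat is scaled to a mean of 1.0 so the division corrects the
    falloff shape without changing the overall level.  Any error in the
    flat's shape passes straight through to the result, which is why
    the required accuracy (quantified below) is so brutal.
    """
    norm = flat.astype(float) / flat.astype(float).mean()
    return light.astype(float) / norm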

Here is what the flat-field corrected 32-bit stacked image histogram looks like in DSS: nothing at all like what you will find online:

[Image: 65dce22a-fb53-40e0-ba11-e17bfcaf001d-original.jpg]


Here are example linescans of the impact of a ‘first guess’ flat on the 16-bit downsampled stacked images, presented in the same order as above:

[Image: 9b7f544b-4cab-4302-b6fb-af0a0a70b845-original.jpg]

[Image: 0452490d-b4a8-45de-b89f-7df59ec1274f-original.jpg]

[Image: 898d5ce4-94c0-4348-acbb-b051eed1c07a-original.jpg]


The first two scans appear to show near-complete correction; however, the extreme tone stretching that occurs with a Reinhard tone map shows that the residual background is not sufficiently uniform. In fact, the shape of the residual background indicates that the selected flat over-corrected the bright-field images. After doing this for a year, my experience tells me what the next-guess flat should be, with the following result:

[Image: df4ce46f-b3db-45e3-ac61-5cd536918c67-original.jpg]


This is clearly a significant improvement. But for perspective, let’s look at the actual differences between those two flats: I computed the (16-bit valued) difference between the two flat images, and here’s a linescan through the result:

[Image: d363c56a-8149-4fe5-bc5b-6e8b81781739-original.jpg]


Look at the vertical scale- at most, the difference is about 55 grey values out of 65535, a difference of 0.08%. Out in the wings, where most of the improvement occurred, the difference is closer to 0.01%. That’s just crazy, IMO, and demonstrates how nonlinear my tone mapping really is.

The residual non-flat background creates problems when subtracting the background in post-processing, which is my next step. As I will show, if the posterization/quantization problem is fixed, there will be a large improvement in my ability to subtract the background.
 

  • #25
Flats are not designed to correct for the asymmetric skyglow you get when imaging near urban areas, and I don't recommend trying to adapt them to do so. Flats should only be used to correct for vignetting, dust on the optics, and variations in your sensor's response over its pixel field.

On my phone right now, so I can't go into more detail at this time.
 
  • #26
@Andy Resnick unfortunately my processing software crashes when I attempt to load your photo. It's an old program, so it doesn't really surprise me. I can't even get it to run on a newer PC. My idea was to try what's known in my software as a median rank process to extract the large scale structure from the image (the skyglow) and then subtract it from the original image.
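For anyone who wants to try the same idea, a large-kernel median filter is a reasonable stand-in for that median rank process (a sketch; the kernel size is a guess and must dwarf the stars):

Code:
import numpy as np
from scipy.ndimage import median_filter

def subtract_skyglow(img, kernel=129):
    """Estimate the large-scale background with a wide median filter,
    then subtract it.  The kernel must be much larger than any star;
    the hard part is that it should also be larger than the nebulosity
    you want to keep, or that gets subtracted too."""
    background = median_filter(img.astype(float), size=kernel)
    return img - background, background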
 
  • #27
@Andy Resnick For background subtraction can you take a grossly out-of-focus image of your target and then subtract that from a "good," in-focus image? You may have to play with the amplitude scaling of the two images for best cancellation.

Cheers,
Tom
 
  • #28
Tom.G said:
@Andy Resnick For background subtraction can you take a grossly out-of-focus image of your target and then subtract that from a "good," in-focus image? You may have to play with the amplitude scaling of the two images for best cancellation.

Cheers,
Tom

If there are only stars in the images, this can work well. However, when there are extended nebula/dust-cloud features of approximately the same size as the background gradient, it doesn't work well. I should be able to show this shortly.
 
  • #29
Drakkith said:
@Andy Resnick unfortunately my processing software crashes when I attempt to load your photo. It's an old program, so it doesn't really surprise me. I can't even get it to run on a newer PC. My idea was to try what's known in my software as a median rank process to extract the large scale structure from the image (the skyglow) and then subtract it from the original image.

Yep...
 
  • #30
Update:
Tom.G's suggestion about using 16-bit brights proved useful; I don't have the linescans to show this, but based on the results, there is evidence that DSS wasn't handling the RAW images correctly. There are some RAW settings in DSS I can play with- stacking the 16-bit images invalidated the flat frames- but I'm hoping to show some dramatic improvement within a few days.

Progress?
 
  • #31
What do you mean stacking invalidated the flats?
 
  • #32
Andy Resnick said:
Update:...
- stacking the 16-bit images invalidated the flat frames- but I'm hoping to show some dramatic improvement within a few days.. ...Progress?

Drakkith said:
What do you mean stacking invalidated the flats?
Yup, it didn't make sense to me either.
The only thing I could think of that would do that is if your flats were taken differently than your lights ... different exposure time, focal length, ISO setting, etc.

Flats all need to be done in the same way as your lights; otherwise DSS will reject them.
 
  • #33
Hmmm. It's odd that it would reject them for having different exposure times. Flats are almost never the same exposure time as your lights.
 
  • #34
Now I have some data supporting my hypothesis that DSS wasn't interpreting the RAW data files correctly (thanks to Tom.G!). Here are two linescans comparing the 'old' interpolation method ('Bilinear Interpolation') and the new, *correct* interpolation method ('Adaptive Homogeneity-Directed (AHD) Interpolation'). No flats were used during stacking; these images were acquired at f/4.

[Image: f593f589-4844-490f-b5c8-21ab7734ff7d-original.jpg]

[Image: 911d899b-bd1d-4698-85fb-92a812739951-original.jpg]


The stairstep effect is completely gone now (and the background is a lot flatter as well). I used the original RAW files, not 16-bit TIF images. The DSS 32-bit histogram looks identical; this problem was really subtle...

Ok, regarding my comment about 'stacking invalidated the flats': last year (or so), when I got more serious about flat-field correction, DSS generated a whole range of 16-bit Master Flat TIF files, and I then deleted all the original RAW files. I figured, why keep 100 GB of files I no longer need? What I didn't know is that the bilinear TIF interpolation from DSS and the TIF interpolation algorithm from Nikon's Capture NX program create differently-sized pixel arrays. Consequently, DSS simply ignored the flat file when stacking 16-bit TIFs.

Worth noting: once you have 16-bit Master Flats, you can manipulate them to improve performance. That's how I generated multiple master flats to eke out that final 0.01% difference: for example, I generated a series of mixes of a Master f/2.8 flat and a Master f/3.5 flat in different proportions (75/25, 50/50, 25/75). The RAW flats do not have to have the same image parameters as the bright images; however, all the RAW flats must have the *same* image parameters as each other (I can't mix and match different ISOs, for example).
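The blending itself is trivial; a sketch of the proportions mentioned above (the exact ratios were trial and error):

Code:
import numpy as np

def blend_flats(flat_a, flat_b, alpha=0.75):
    """Mix two 16-bit master flats: alpha parts A, (1 - alpha) parts B."""
    mix = alpha * flat_a.astype(float) + (1.0 - alpha) * flat_b.astype(float)
    return np.round(mix).astype(np.uint16)

# e.g. the 75/25, 50/50, and 25/75 mixes of the f/2.8 and f/3.5 masters:
# blends = [blend_flats(f28, f35, a) for a in (0.75, 0.50, 0.25)]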

So, I'm happy to report that the major problem has been solved (thanks, everyone!). Since I'm teaching QM this semester, I'm calling this the 'first quantization correction'. Last night I took some new RAW flat images; today I will generate new Master Flats with the AHD interpolation method ('second quantization') and see how that impacts my low f-number images. Presumably I won't have to contort the flats so much anymore. I'll post results in the usual thread rather than this one.
 

  • #35
Andy Resnick said:
I'll post results in the usual thread rather than this one.
Usual? What usual? Enlighten us. I sure don't want to miss those new results!

Cheers,
Tom
 

1. What is Astro Image Stacking?

Astro Image Stacking is a technique used in astrophotography to combine multiple images of the same object taken at different times or with different settings, in order to improve the overall quality and reduce noise in the final image.

2. How does Astro Image Stacking work?

Astro Image Stacking software, such as DeepSkyStacker, aligns and combines the individual images, taking into account the movements of the stars and any changes in the camera's settings. This results in a single, high-quality image with a lower noise level.

3. What is the benefit of using Astro Image Stacking?

Astro Image Stacking allows for a cleaner and more detailed final image, as the noise is reduced and more light is captured from the object. It also helps to bring out faint details that may not be visible in a single exposure.

4. What is DSS (Deep Sky Stacker)?

DSS is popular software for Astro Image Stacking. It is a free, user-friendly program that allows for the alignment and stacking of images, as well as post-processing to enhance the final image.

5. What are some tips for optimizing DSS image stacking and post-processing?

Some tips for optimizing DSS image stacking and post-processing include selecting a good set of images with minimal noise, making sure the images are properly aligned, and experimenting with different settings and techniques to find what works best for your specific image and equipment. It is also helpful to use dark frames and flat frames to further improve the quality of the final image.
