Exposures & Stacking for DSLR Sky Photos

In summary, the conversation discusses a problem with dynamic range in DSLR pictures of the sky and the supposed limitation of a 256-level contrast range. To capture faint objects while still retaining detail in brighter areas, multiple images with different exposure times can be combined in a stacking process. A sensor with a logarithmic response is also mooted, and the thread ends with a discussion of what DSLR sensors actually deliver (12- or 14-bit RAW) and how exposures of different lengths can be combined for a wider dynamic range.
  • #1
sophiecentaur
Science Advisor
Gold Member
I have a big problem with dynamic range in my DSLR pictures of the sky. The contrast range of pictures on a DSLR is 256 levels. That's a range of around 6 magnitudes. To look at very faint objects, I have to expose a picture so that a faint object sits a reasonable number of levels above black, if the statistics of stacking are going to help. Any picture - particularly a wide-field picture - is going to include mag 2 or 3 stars, which are going to burn out. The stacking process can yield a bigger contrast range if it gives a 16-bit image, but how do I get rid of the gross white blobs? Do I really just have to edit them out and insert those stars from a lower-exposed image? I guess the answer has to be Yes. But some of those bright stars are bang in the middle of nebulae. What must I do to make the resulting picture look like 'the truth'?
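(For context, the 256-levels-to-6-magnitudes figure follows directly from the magnitude scale: a linear range of N levels spans 2.5·log10(N) magnitudes. A quick sketch of the arithmetic, assuming a purely linear sensor response:)

Code:
import math

def range_in_magnitudes(levels: int) -> float:
    """Magnitude span covered by a linear range of `levels` steps."""
    return 2.5 * math.log10(levels)

print(range_in_magnitudes(256))    # ~6.0 mag for an 8-bit range
print(range_in_magnitudes(2**14))  # ~10.5 mag for a 14-bit RAW range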
 
  • #2
Drakkith
sophiecentaur said:
The stacking process can yield a bigger contrast range if it gives a 16-bit image, but how do I get rid of the gross white blobs? Do I really just have to edit them out and insert those stars from a lower-exposed image? I guess the answer has to be Yes. But some of those bright stars are bang in the middle of nebulae. What must I do to make the resulting picture look like 'the truth'?

The only way to deal with this that I know of is to do exactly what you're already doing. As long as you're not burning out your pixel counts (maxing them out) then you can edit the image to bring the brightness of the stars down and keep the surrounding details. If your exposures are too long and you're maxing your pixel counts around these stars then all that detail is lost forever. Maybe @russ_watters knows a better way.
 
  • #3
sophiecentaur
Drakkith said:
The only way to deal with this that I know of is to do exactly what you're already doing. As long as you're not burning out your pixel counts (maxing them out) then you can edit the image to bring the brightness of the stars down and keep the surrounding details. If your exposures are too long and you're maxing your pixel counts around these stars then all that detail is lost forever. Maybe @russ_watters knows a better way.
You mean 'Curves'? But that's too late to deal with the real problem, afaics. For the brighter stars, the image gets bigger in proportion to the brightness (the sin(x)/x curve gets clipped further and further down, plus whatever happens 'electronically' on the sensor). The more of that you're prepared to put up with, the dimmer the wanted object that can be recorded. The basic limit of 255 levels is there always (at least, on a DSLR). I guess what's needed is a sensor with a logarithmic response. I imagine someone's going to tell me that there is one.
 
  • #4
sophiecentaur said:
You mean 'Curves'? But that's too late to deal with the real problem, afaics. For the brighter stars, the image gets bigger in proportion to the brightness (the sin(x)/x curve gets clipped further and further down, plus whatever happens 'electronically' on the sensor). The more of that you're prepared to put up with, the dimmer the wanted object that can be recorded.

In a single exposure perhaps. But if you're stacking you can put together any number of images to bring out the finer details without burning out your image around the bright stars. If you can see this https://scontent.xx.fbcdn.net/v/t31.0-8/857423_470714832982111_1578199853_o.jpg?oh=7a35ef672dc7d72fe9991c6f9bdad4e7&oe=590F5E58, there's a very bright star (Eta Carinae) just above the 'corner' formed by the dust. I was able to bring out the details surrounding the star by altering the brightness and contrast curves and/or performing some other digital processing. Otherwise it would have been just a huge bright spot.

sophiecentaur said:
I guess what's needed is a sensor with a logarithmic response. I imagine someone's going to tell me that there is one.

As far as I know, you want your camera to have a linear response across most of its range. But I don't use a DSLR, so I don't know if things are different there.
 
  • #5
glappkaeft
The main problem with star colors in wide-angle astrophotography is the amazing light-gathering capability of a fast, wide lens combined with a digital sensor. Since all the light from a star is concentrated on one or just a few pixels, a bright star will blow out those pixels in seconds. It gets worse because most affordable digital sensors have small pixels with shallow electron wells, so they saturate pretty quickly.

sophiecentaur said:
The basic limit of 255 levels is there always (at least, on a DSLR).
Any decent DSLR should be able to output a 12 or 14 bit RAW image.

sophiecentaur said:
I guess what's needed is a sensor with a logarithmic response. I imagine someone's going to tell me that there is one.
The closest thing available is old-school photographic film. It has an S-shaped response curve, but it's still not really enough to control star color in wide-angle photography.
 
  • #6
glappkaeft
The traditional method of controlling the stars in an image with a large dynamic range is to take images with two or more different exposure times and then combine them. This can be done in the image editing software or sometimes directly in the stacking software. Many of the best images of the Orion nebula are made from combining a series of 5 sec, 30 sec and 5 min exposures or similar.
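For illustration, that combine step can be sketched in a few lines of numpy (a rough outline, not any particular package's algorithm; the frame arrays, exposure times and the 14-bit saturation level are assumptions):

Code:
import numpy as np

def combine_exposures(frames, exp_times, saturation=0.95 * (2**14 - 1)):
    """frames: list of aligned 2-D arrays of linear counts; exp_times: seconds."""
    order = np.argsort(exp_times)[::-1]          # longest exposure first
    result = None
    for i in order:
        rate = frames[i] / exp_times[i]          # counts per second
        good = frames[i] < saturation            # pixels not blown out
        if result is None:
            result = np.where(good, rate, np.nan)
        else:
            # fill pixels still unresolved, using progressively shorter exposures
            result = np.where(np.isnan(result) & good, rate, result)
    shortest = order[-1]   # fall back for pixels saturated in every frame
    return np.where(np.isnan(result), frames[shortest] / exp_times[shortest], result)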
 
  • #7
sophiecentaur
glappkaeft said:
Any decent DSLR should be able to output a 12 or 14 bit RAW image.
That's an interesting comment and it could explain quite a lot. My K10D has files of about 16 MB (10 Mpixel DNG) and my K2S around 20 MB (20 Mpixel DNG), which implies that it's not just 8 bits per pixel - or any simple relationship. The K2S is pretty up to date, so I'd expect something near optimum.
Do you have a reference about where the bit depth comes from in typical coding?
 
  • #8
sophiecentaur
glappkaeft said:
The traditional method of controlling the stars in an image with a large dynamic range is to take images with two or more different exposure times and then combine them. This can be done in the image editing software or sometimes directly in the stacking software. Many of the best images of the Orion nebula are made from combining a series of 5 sec, 30 sec and 5 min exposures or similar.
So you're saying that TIFF files of different brightnesses (more bits per pixel) are easier to combine and produce a bigger contrast ratio. That makes sense, but I'd need some play time to do it convincingly.
I have just ordered Make Every Photon Count and will devour it when it arrives. It comes highly recommended.
 
  • #9
Drakkith
sophiecentaur said:
So you're saying that TIFF files of different brightnesses (more bits per pixel)

Brightness is essentially photons per pixel (or electrons per pixel, or counts per pixel), not bits per pixel.

sophiecentaur said:
I have just ordered Make Every Photon Count and will devour it when it arrives. It comes highly recommended.

I've not heard of this before. I'll have to look into it. :biggrin:
 
  • #10
glappkaeft
sophiecentaur said:
That's an interesting comment and it could explain quite a lot. My K10D has files of about 16 MB (10 Mpixel DNG) and my K2S around 20 MB (20 Mpixel DNG), which implies that it's not just 8 bits per pixel - or any simple relationship. The K2S is pretty up to date, so I'd expect something near optimum.
Looking at the specs, it should be a 12-bit monochrome RAW format stored using lossless compression.
sophiecentaur said:
Do you have a reference about where the bit depth comes from in typical coding?
There are many, but they are usually quite wordy and tailored to CCD cameras. Basically, a pixel on a digital camera (CMOS/CCD) is a tiny solar cell that can store a number of electrons (AFAIK, 1 electron per detected photon). The maximum number of electrons a pixel can store is called the well depth; depending on the technology used and the size of the pixel, this is in the 1,000s to 100,000s range. If the well is full, no more data can be captured and the pixel is saturated/blown out.

When the pixel is read, the voltage from the electrons in the well is amplified and measured by an ADC (analog-to-digital converter). The ADC outputs a digital number (ADU) of N bits (22 bits for your K2S, which is an odd choice since it is much larger than necessary). Each digital unit (ADU) can be converted back to electrons by multiplying it by the gain of the ADC (electrons/ADU).
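A minimal sketch of that bookkeeping (the gain and well depth below are made-up example values; real figures vary from camera to camera):

Code:
GAIN_E_PER_ADU = 1.5     # assumed ADC gain, electrons per ADU
WELL_DEPTH_E = 45_000    # assumed full-well capacity, electrons

def adu_to_electrons(adu: float) -> float:
    """Convert an ADC output value back to detected electrons."""
    return adu * GAIN_E_PER_ADU

def is_saturated(adu: float) -> bool:
    """True if the pixel's well would be full at this reading."""
    return adu_to_electrons(adu) >= WELL_DEPTH_E

print(adu_to_electrons(1000))  # 1500.0 electrons
print(is_saturated(30000))     # True: 45,000 e- fills the assumed well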

This data must be stored. In a camera made especially for astronomy or other scientific imaging, this just means that the information from the ADC is stored as-is. For a DSLR the situation is different. Even the RAW images are somewhat processed using proprietary algorithms and then stored at whatever bit depth the RAW format uses (for both your cameras, 12 bit). This is still mostly a number proportional to the number of photons detected per pixel. Color conversion is actually done later, in the image-processing/stacking software; for a DSLR that process is called debayering (from the Bayer mask in front of the sensor that makes it possible to reconstruct a color image).

For a JPG a lot more processing is done - debayering to get a color image, processing with gamma curves, sharpening, noise reduction, etc. - and the result is then compressed to an 8-bit JPG. This is why you should not use JPG for processing if you are into serious astro-imaging: you want to stack the linear RAW files and then control this process yourself.
 
  • #11
davenn
sophiecentaur said:
So you're saying that TIFF files of different brightnesses (more bits per pixel) are easier to combine and produce a bigger contrast ratio. That makes sense, but I'd need some play time to do it convincingly.

don't convert to TIFF and then stack/edit ... keep them as a RAW and do all processing ... stacking/editing

and also as glappkaeft said

glappkaeft said:
For a JPG a lot more processing is done - debayering to get a color image, processing with gamma curves, sharpening, noise reduction, etc. - and the result is then compressed to an 8-bit JPG. This is why you should not use JPG for processing if you are into serious astro-imaging: you want to stack the linear RAW files and then control this process yourself.

NEVER convert to jpg before stacking and editing
Dave
 
  • #12
It's a good idea, and it saves time, to balance out your dark frames before stacking images.
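If this means the usual dark-frame calibration, a minimal numpy sketch looks like this (array names are assumptions; frames are taken as floats):

Code:
import numpy as np

def calibrate(lights, darks):
    """Median-combine darks into a master dark, subtract from each light."""
    master_dark = np.median(np.stack(darks).astype(float), axis=0)
    return [light.astype(float) - master_dark for light in lights]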
 
  • #13
sophiecentaur
Drakkith said:
not bits per pixel.
I was referring to the number of levels that the ADC can resolve for each sensor element. For a monochrome camera that would be more easily related to a final file size - if there were no compression.
 
  • #14
sophiecentaur
davenn said:
NEVER convert to jpg before stacking and editing
Absolutely. I would never ever ever ever do that - even with happy family snaps.

davenn said:
don't convert to TIFF and then stack/edit ... keep them as a RAW and do all processing ... stacking/editing
OK, that could make sense. I went into TIFF because N4 didn't seem capable of producing a proper colour image from my RAW files. I assumed that it wasn't making sense of the file metadata to do the right debayering. But I do have a query about what you say. If N4 shifts images before stacking, then how can one be sure that the correct pixels (on the Bayer matrix) will get added together? If you take the un-debayered file, they just appear as an array of little squares. Does the shifting take this into account? (I already have doubts about the debayering from my Pentax files; all other photo software gets it right.) Is anything lost in going from RAW to the non-compressed TIFF conversion?
 
  • #15
davenn
sophiecentaur said:
I went into TIFF because N4 didn't seem capable of producing a proper colour image from my RAW files.

ahhhh OK ... N4 (I assume that is Nebulosity4). I haven't really played with that prog much other than dabbling with the trial version ... price to buy was a little steep for me ... Maybe it doesn't handle the Pentax .DNG files?

sophiecentaur said:
Is anything lost in going from RAW to the non-compressed TIFF conversion?

There isn't any real compression in TIFF files ... but you end up with a file that isn't as editable as the original RAW file, as things like colour balance, white balance and a few other things are fixed at the time of conversion from RAW to TIFF. You just don't get the introduced artifacts caused by the significant compression when going from RAW to JPG.
Dave
 
  • #16
sophiecentaur
davenn said:
ahhhh OK ... N4 (I assume that is Nebulosity4). I haven't really played with that prog much other than dabbling with the trial version ... price to buy was a little steep for me ... Maybe it doesn't handle the Pentax .DNG files?
There isn't any real compression in TIFF files ... but you end up with a file that isn't as editable as the original RAW file, as things like colour balance, white balance and a few other things are fixed at the time of conversion from RAW to TIFF. You just don't get the introduced artifacts caused by the significant compression when going from RAW to JPG.
The TIFF files are massive; just 16-bit RGB, I think. N4 wouldn't even look at my PEF raw-format files, so I just use DNG, the more generic system. I am surprised at what N4 seems to do with them, because PS and other processing packages make perfect sense of them.
Have you a comment about the effect of moving the images about for stacking? Are the quanta of movement bigger than the sensor element spacing, then?

Anyway, when you take a number of different-exposure images, how do you fit them together? I suppose PS masks could allow the 'unburnt' star images in the less-exposed images to replace the burnt ones, but the unwanted ones are bigger, so do you have to do some feathered selection business? Then the background could look funny around the doctored bright stars. I have seen some clever images of a full moon with Saturn peeking around from behind it. That must require quite a bit of jiggery pokery, I imagine.
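For what it's worth, the feathered-selection idea can be sketched in a few lines of numpy/scipy (a rough illustration, not a Photoshop recipe; the short exposure is assumed to be aligned and already scaled to match the long exposure's brightness):

Code:
import numpy as np
from scipy.ndimage import gaussian_filter

def blend_burnt_stars(long_exp, short_exp, saturation, feather_px=5.0):
    """Replace blown-out regions of long_exp with short_exp, with a soft edge."""
    mask = (long_exp >= saturation).astype(float)                # 1.0 where burnt out
    mask = np.clip(gaussian_filter(mask, feather_px), 0.0, 1.0)  # feather the edge
    return mask * short_exp + (1.0 - mask) * long_exp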
 
  • #17
sophiecentaur
BTW, I just got that book in the post. It seems to cover quite a lot of what I need, with actual pictures of the various setups the guy has used. The downside is £££££, though. haha
Must finish my work in the garden before I get stuck into it.
 
  • #18
davenn said:
There isn't any real compression in TIFF files
There is an option for TIFF compression in my Aperture (OS X).
 
  • #19
russ_watters
Drakkith said:
The only way to deal with this that I know of is to do exactly what you're already doing. As long as you're not burning out your pixel counts (maxing them out) then you can edit the image to bring the brightness of the stars down and keep the surrounding details. If your exposures are too long and you're maxing your pixel counts around these stars then all that detail is lost forever. Maybe @russ_watters knows a better way.
Not much beyond what has already been said:
No matter what, you need better than 8-bit color depth. You lose way too much with such a low depth. Some of the best parts of my pictures you can't even see until you stretch them. Higher bit depth in the originals and shorter (or different-length) exposures make for higher overall dynamic range. I've had trouble working with different exposure lengths, though, specifically because it is hard to overlay them when the stars are different sizes.
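(The "stretch" here is typically a non-linear curve applied to the linear stack; a minimal sketch of an asinh stretch, a common choice, with an assumed softening parameter:)

Code:
import numpy as np

def asinh_stretch(img, softening=0.01):
    """img: linear data normalised to 0..1; returns stretched 0..1 data."""
    return np.arcsinh(img / softening) / np.arcsinh(1.0 / softening)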

Also, there are software tools, such as Photoshop actions, to shrink blown-out stars, but often I think they add artistic flair and I tend to leave them.
 
  • #20
russ_watters
sophiecentaur said:
I have seen some clever images of a full moon with Saturn peeking around from behind it. That must require quite a bit of jiggery pokery, I imagine.
My suspicion is that such photos are composites of separate exposures/processes, combined together after the fact. That is how I sometimes do planets with moons. I literally just cut and paste the processed photo of the planet into the processed photo of the moons.
 
  • #21
sophiecentaur
russ_watters said:
Not much beyond what has already been said:
No matter what, you need better than 8-bit color depth. You lose way too much with such a low depth. Some of the best parts of my pictures you can't even see until you stretch them. Higher bit depth in the originals and shorter (or different-length) exposures make for higher overall dynamic range. I've had trouble working with different exposure lengths, though, specifically because it is hard to overlay them when the stars are different sizes.

Also, there are software tools, such as Photoshop actions, to shrink blown-out stars, but often I think they add artistic flair and I tend to leave them.
The 8-bit contrast range was a misconception of mine. I should have realized that a DSLR does better than that!
Pleased (in a way) that the multiple-exposure-times exercise is not straightforward. There is always the fundamental limitation of the display. That can be as high as 1000:1 (claimed), but that could be just sellers' hype. In the case of AP images, I would think it's quite acceptable to fool around with curves (in highlights and blacks too) to reduce the contrast so everything is visible. Let's face it: with the help of a camera, you can see stuff that your eyes (even with a telescope) could never reveal, even under ideal (non-UK) viewing conditions.
 
  • #22
sophiecentaur
russ_watters said:
My suspicion is that such photos are composites of separate exposures/processes, combined together after the fact. That is how I sometimes do planets with moons. I literally just cut and paste the processed photo of the planet into the processed photo of the moons.
You can put fairies at the bottom of your garden too, that way. :wink:
 
  • #23
russ_watters
sophiecentaur said:
You can put fairies at the bottom of your garden too, that way. :wink:
Different "purity" standards, I guess. I'm not one who believes that photos can't be heavily processed: technology is a good thing and should be used to enhance our experiences and not seen as poisoning them. Where I draw the line is adding things that are not actually there. To me, combining the best parts of two real photos of the same scene isn't adding things that aren't there.

But then, I also favor the instant replay in baseball...
 
  • Like
Likes sophiecentaur
  • #24
davenn
russ_watters said:
My suspicion is that such photos are composites of separate exposures/processes, combined together after the fact.

tho, I'm sure some do that just to try and say hey look what I captured ... it can easily be achieved with a single exposure ... you just have to be in the right spot to get the start or end of the occultation
 
  • #25
davenn said:
tho, I'm sure some do that just to try and say hey look what I captured ... it can easily be achieved with a single exposure ... you just have to be in the right spot to get the start or end of the occultation
I tried once and had trouble with the large difference in brightness.
 
  • #26
It would be tricky with a full moon (or close to it).

Early-evening occultations, during the smaller phases of the moon ... thin crescent to first quarter ... would be the best times.
Twilight is the best time to photograph the moon and the brighter planets, before the brightness of either becomes a problem.
 

What is exposure in DSLR photography?

Exposure in DSLR photography refers to the amount of light that enters the camera and hits the camera's sensor. It is controlled by adjusting the shutter speed, aperture, and ISO. A well-exposed photo has a balance of these three elements.
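As a rough numerical illustration, the three settings trade off through the standard exposure-value formula EV = log2(N²/t), adjusted for ISO (the example settings below are arbitrary):

Code:
import math

def exposure_value(aperture_n: float, shutter_s: float, iso: int = 100) -> float:
    """Standard EV formula, ISO-adjusted relative to ISO 100."""
    return math.log2(aperture_n**2 / shutter_s) - math.log2(iso / 100)

print(exposure_value(2.8, 30, 1600))  # about -6 EV: a typical night-sky setting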

What is the importance of exposure in taking sky photos with a DSLR?

Exposure is crucial in taking sky photos with a DSLR because the sky is generally much brighter than the foreground. A properly exposed photo will capture the details and colors of the sky while also preserving the details and colors in the foreground.

What is stacking in DSLR sky photography?

Stacking is a technique used in DSLR sky photography where multiple exposures of the same scene are combined to create a final image with better details and less noise. This is especially useful when taking photos of the night sky, as it allows for longer exposure times without creating too much noise.
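A minimal sketch of the averaging at the heart of stacking (alignment and calibration are assumed to have been done already):

Code:
import numpy as np

def stack_mean(frames):
    """frames: list of aligned 2-D arrays; random noise falls roughly as sqrt(N)."""
    return np.mean(np.stack(frames), axis=0)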

How do I determine the correct exposure settings for sky photos with a DSLR?

The best way to determine the correct exposure settings for sky photos with a DSLR is to use the camera's histogram. This will show you the distribution of light in the photo and help you adjust your settings to achieve a well-exposed image. Additionally, you can use the exposure triangle and adjust the shutter speed, aperture, and ISO to find the right balance for the scene.
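As a rough illustration, the histogram check can be automated by measuring how much of the frame is clipped at either end (the 2% thresholds and 14-bit depth are assumptions):

Code:
import numpy as np

def exposure_report(img, bit_depth=14):
    """Return the fractions of pixels near black and near saturation."""
    full = 2**bit_depth - 1
    clipped_low = float(np.mean(img <= full * 0.02))   # fraction near black
    clipped_high = float(np.mean(img >= full * 0.98))  # fraction near saturation
    return clipped_low, clipped_high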

What are some tips for successfully stacking sky photos with a DSLR?

Some tips for successfully stacking sky photos with a DSLR include using a tripod to keep the camera steady, shooting in RAW format for better image quality, taking multiple exposures with different settings, and using post-processing software to combine the images. It is also important to have a clear and dark sky to minimize noise in the final image.
