Deep Space Imaging and Stacking Images for Less Noise

In summary, stacking individual exposures can greatly improve the signal-to-noise ratio and overall detail in astrophotography. Various software options are available for stacking, ranging from free to expensive; these include Deep Sky Stacker, Photoshop CC, Nebulosity, and PixInsight. Learning the art of stretching the RGB color curves is important for bringing out all the detail in the final image. Additionally, white balance can be a challenging aspect of stacking and may require some experimentation with color-temperature settings. It is also recommended to use RAW image files for stacking.
  • #1
davenn
hi guys
Today I would like to show the advantages of stacking a set of exposures over just a single shot
There are various ways to stack individual exposures. A number of programs are available, ranging
from free to quite expensive ... several hundred US$.
free = DSS - Deep Sky Stacker
http://deepskystacker.free.fr/english/download.htm

Photoshop CC = PS and LR (Lightroom) CC come as a subscription these days and are quite affordable.
So not only do you get stacking features in PS, you also get very powerful editing software.

a little more expensive :wink: = Nebulosity
http://www.stark-labs.com/nebulosity.html

most expensive = PixInsight, around US$250
https://pixinsight.com/

Both PixInsight and Nebulosity have reasonably steep learning curves.
I have dabbled with both and in the end gone back to Deep Sky Stacker.
I have done a couple of stacks in PS; they were sort of OK.
The trick with all the software is learning the art of stretching the RGB colour curves to bring out all the details ... something I am still learning.

Let's take a practical example and see the difference between stacking and a single image.
One of the major purposes of doing this is to improve the signal-to-noise ratio. The noise is
primarily sensor thermal noise, which starts to come through because there isn't enough signal to overcome it: we are imaging in very low light, usually at high ISO settings (> 1000).
Noise is random in its appearance and location from exposure to exposure, but the signal (the stars, nebulae etc.) isn't. So when frames are stacked, the signal is compounded and improved while the noise tends to cancel out.
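
For anyone who likes numbers, here is a minimal Python/numpy sketch (an illustration, not part of the original workflow) of why this works: a synthetic "nebula" plus random noise, one exposure versus a six-frame average.

```python
# Minimal sketch: mean-stacking N frames cuts the random-noise standard
# deviation by ~sqrt(N) while leaving the fixed signal untouched.
import numpy as np

rng = np.random.default_rng(0)
signal = np.zeros((100, 100))
signal[40:60, 40:60] = 50.0            # a fake "nebula"

def snr(frame):
    obj = frame[40:60, 40:60].mean()   # mean signal in the object region
    noise = frame[:20, :20].std()      # noise estimated from empty "sky"
    return obj / noise

single = signal + rng.normal(0, 10, signal.shape)           # one noisy exposure
frames = [signal + rng.normal(0, 10, signal.shape) for _ in range(6)]
stacked = np.mean(frames, axis=0)                           # six-frame stack

print(f"single frame SNR: {snr(single):.1f}")
print(f"6-frame stack SNR: {snr(stacked):.1f}  (~sqrt(6) = 2.4x better)")
```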

This first image is a single 30 sec exposure with a 400mm lens at f/5.6 and ISO 3200.
The camera is a Canon 700D that I have modded by removing one of the filter elements
that limits IR/UV and colour range ... it improves the sensitivity to the red end of the spectrum

The two main objects are M20, the Trifid Nebula, on the left, and M8, the Lagoon Nebula, on the right.

[Image: IMG_0139sm.jpg]


You can see there is a bit of a pinkish hue to the image as a result of the extra red sensitivity

Now here are six separate exposures (all with the same settings), taken one after the other and stacked in DSS:

[Image: 2016-05 M20 and M8sm.jpg]


The result is glaringly obvious. I have been able to keep the red hue under control and to achieve much better contrast between the nebulae, the stars, and the dark background. Also, if you look closely, you can see a definite reduction in the fuzzy noise across the whole image. The overall result is much better detail in the image.

Those are some initial comments; I will go into more stacking detail in another post in this thread.

Other questions and comments are welcome

cheers
Dave
 
  • #2
I'm currently struggling with white balance/colorimetry in my stacks and would appreciate any tips. Stacking images tends to result in desaturation, and I can't seem to create a robust post-processing algorithm to restore proper colors. Any tips would be helpful...
 
  • #3
Andy Resnick said:
I'm currently struggling with white balance/colorimetry in my stacks and would appreciate any tips.

WB is difficult. Generally I look at star colour to see whether the overall colour range of the stars is even.
This is mostly done with small adjustments of the colour temperature setting.

Before we go further, what stacking and editing software are you using?

Dave
 
  • #4
davenn said:
Before we go further, what stacking and editing software are you using ?

The free stuff: Deep Sky Stacker for stacking and 32-to-16-bit 'mixdown', then ImageJ for everything else. I'm getting better with the color balance (see recent post), but it's all ad hoc and so my panoramas don't match very well.
 
  • #5
Very cool, Dave.
 
  • #6
I use Registax, though it is no longer being updated and has some issues with large files (if you are using video...).

Nice pic, btw.
 
  • #7
Andy Resnick said:
I'm currently struggling with white balance/colorimetry in my stacks and would appreciate any tips. Stacking images tends to result in desaturation, and I can't seem to create a robust post-processing algorithm to restore proper colors. Any tips would be helpful...
When you stack, you are left with a grey background due to your skyglow (if that's what you are referring to...). You can use the "levels" function in Photoshop to just cut off below a certain grey level, making the background black.
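
If you'd rather script it, that same black-point cut can be sketched in a few lines of numpy (illustrative only; the 0.12 threshold is a placeholder you would read off your own histogram):

```python
# Sketch of a "levels" black-point cut: everything below the chosen grey
# level is clipped to black and the remaining range is re-stretched.
import numpy as np

def set_black_point(img, black=0.12):
    """img: float array scaled 0..1; black: grey level to clip at."""
    out = (img - black) / (1.0 - black)   # re-stretch what survives the cut
    return np.clip(out, 0.0, 1.0)         # anything below `black` becomes 0
```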
 
  • #8
russ_watters said:
I use Registax, though it is no longer being updated and has some issues with large files (if you are using video...).

Nice pic, btw.

Registax is great for planetary image stacking; it's specifically designed to load hundreds to thousands of video frames
and generate good, sharp images of planets ... its huge popularity attests to its capability of doing this well :smile:
I personally haven't played with it, tho I have seen the good results of those that have.
It's amazing what can be done with a telescope and a webcam (or similar small camera);
many astro shops have low-cost cameras for this process as well,
eg ...
http://www.bintel.com.au/Astrophotography/CCD-cameras/Orion-StarShoot-USB-Eyepiece-Camera-II/1487/productview.aspx

Dave
 
  • #9
Andy Resnick said:
The free stuff: Deep Sky Stacker for stacking and 32-to-16-bit 'mixdown', then ImageJ for everything else. I'm getting better with the color balance (see recent post), but it's all ad hoc and so my panoramas don't match very well.

Deep Sky Stacker works very well for the stacking process. I hadn't heard of the ImageJ image-processing software.

Yes, I could see the colour balancing problems you are having. Really shouldn't be that green, huh :wink:
The galaxy and all your stars are a bit off-colour ... experiment with the colour temperature settings a bit,
and be careful not to push the vibrance or saturation ...

[Image: 58m_filtered-2_zpsbgbwkqwd.jpg]


Andy Resnick said:
I'm currently struggling with white balance/colorimetry in my stacks and would appreciate any tips. Stacking images tends to result in desaturation, and I can't seem to create a robust post-processing algorithm to restore proper colours. Any tips would be helpful...

yes, it's natural for the stacked set to look very bland and washed out ... don't worry about that
Now I assume you are stacking RAW image files from your camera? If not, you should be
(don't use JPGs ... but you are probably aware of that :wink: ... just checking).

Here's the 5-image stack of that above image before being exported to Lightroom.
As you can see, it looks pretty blah, and it has more detail than some stacks do.

[Image: Clipboard01.jpg]


99% of us don't do colour stretching in DSS either. Rather, save the TIFF file and open it in your favourite image editor.
Save it like this ...

[Image: DSS info1.JPG]
time for bed ... hopefully tomorrow nite I can delve into the use of DSS a little more and give links to some good tutorials

Dave
 
  • #10
russ_watters said:
When you stack, you are left with a grey background due to your skyglow (if that's what you are referring to...). You can use the "levels" function in Photoshop to just cut off below a certain grey level, making the background black.

Kinda- the skyglow is not grey, it's reddish (from light pollution) and varies from night to night. My flats, on the other hand, are bluish since I acquired them on an overcast night for better uniformity. In the end, my stacked histograms are not even Gaussian/Poissonian but can exhibit strong skewness. What I end up doing is cranking up the saturation to help me fine-tune the color balance on the background (ideally) to a neutral grey, then relaxing the saturation to get the last bit.
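
For concreteness, one common way to automate that kind of background neutralization, as a hypothetical numpy sketch (not the actual routine used here): since most pixels in a deep-sky frame are sky, the per-channel median approximates the skyglow colour, and rescaling each channel makes it neutral grey.

```python
# Hypothetical sketch: force the sky background to a neutral grey by
# scaling each colour channel by the ratio of the mean background level
# to that channel's own background estimate.
import numpy as np

def neutralize_background(rgb):
    """rgb: float array of shape (H, W, 3), linear (unstretched) data."""
    bg = np.median(rgb, axis=(0, 1))   # per-channel sky estimate
    return rgb * (bg.mean() / bg)      # scale R, G, B so the sky is grey
```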

The flat doesn't totally even out the background, either- it gets close, but not close enough to do a simple threshold.

What I should do is post some screen grabs of DSS so you can see my issues... It's running now, so maybe later today.
 
  • #11
davenn said:
Deep Sky Stacker works very well for the stacking process. I hadn't heard of the ImageJ image-processing software.

ImageJ used to be called NIH Image, and Fiji is a bundled version. It's awesome. I do stack RAW images; what I should do is post some DSS screengrabs to make my 'problems' clearer.

Since I'm colorblind, I can't rely on my eyes for any color adjustments- I have to resort to 'instrument flying', as they say. I use ImageJ for all color tweaks, since it provides all the quantitation I need.
 
  • #12
Andy Resnick said:
Kinda- the skyglow is not grey, it's reddish (from light pollution) and varies from night to night. My flats, on the other hand, are bluish since I acquired them on an overcast night for better uniformity.

Not everyone who dabbles in astrophotography lives or has their telescope in an area with significant light pollution; in fact many of us make a large effort to avoid light pollution. ;) In that case airglow (green) and the zodiacal light (neutral) are the largest contributors to sky glow.
 
  • #13
Ok, here are some screengrabs- these are not my best, but they are not the worst, either. I don't claim to be an expert user, by the way- let me know if I'm totally off-base. Typically, after stacking I get this:

[Image: Clipboard_zps7pmnu2wi.jpg]


Note the extreme slenderness (?) of the background histogram as compared to davenn's. I don't understand the broad curve (in green); the actual green histogram is lying on top of the red and blue ones. Since I can't work with 32-bit images, the first step is to compress this into 16 bits. What I do is a version of Dolby noise reduction- I compress at the high end to maximize separation between the background and faint objects:

[Image: Clipboard-1_zpssszasaxw.jpg]


Some explanation is in order- there is a lot going on here. First, the obvious, is the spatial chromatism- this is (likely) caused by the chromatic differences between my flat and data images. Generally, I try and tweak the color sliders to make the bright stars white (colorless), but it's really hard to do this accurately, given the lack of fine control on the slider bars. The best I can manage will leave the center close to grey and the edges a distinct blue color.

Next is the threshold-like appearance of the luminance map. This is the intensity compression: my images often have objects spanning 10 or 11 magnitudes (or more, if there's a bright star in the field of view). I can reliably image down to magnitude 15 objects, so I want the background to be around magnitude 17 or so. That intensity range (magnitude 5 through 17) is very nearly 16 bits' worth: 12 magnitudes is a factor of 10^(12/2.5) ≈ 63,000, close to 2^16 = 65,536. The mapping should not be linear, but should preserve the logarithmic scale as best as possible. Remember, your screen can only display 256 discrete grey values- so I still have a lot more compression to perform. After saving this 16-bit image (16 bits per channel) I import into ImageJ for further processing. To give you an idea of the SNR of this image, here's a linescan across the diagonal, where I've nicked a couple of stars and the center of M51:

[Image: Plot%20of%2058m2_zps5zhe15rx.jpg]


The luminance is fairly constant across the image, which is good. ImageJ has a background subtraction routine built-in, and after that step I get this intensity profile:

[Image: Plot%20of%2058m3_zpsiinsikhm.jpg]


But the image will look terrible because I haven't (yet) compressed it to 8-bits:

[Image: 58m2.TIF%20RGB_zpsk56flhve.jpg]


So the next compression step is similar to the 32- to 16-bit compression, but involves more tweaking and fine-tuning. I try and work with the 16-bit image stack as much as possible, but it's all squishy and I come up with different algorithms all the time; eventually I can get something like this:

[Image: 58m3.TIF%20RGB_zpszadersqc.jpg]


This is about as far as I can go with ImageJ- there's some additional room for improvement (for example, I can work with a HSB stack and operate on the brightness component to further suppress background), but that's about it. There's still too much green in the midtones, but it's really hard to correct since the whites are white and the blacks are black. What's not apparent on these downsized images is the posterization that occurs way back when I compress to 16 bits per channel. I don't have any good examples to show here, but basically, what appears to be a step change of 1 or 2 grey values on the monitor actually corresponds to intensity changes of 500 or 1000 in the 16-bit image; background subtraction fails on these 'steps'.
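
To make the compression step concrete, here is a rough numpy sketch of the kind of logarithmic 32-to-16-bit mapping described above (a reconstruction for illustration, not the actual DSS/ImageJ procedure):

```python
# Rough sketch of a non-linear 32->16-bit "mixdown": subtract a background
# pedestal, then map through a log curve so faint objects keep more of the
# 16-bit range than a linear rescale would give them.
import numpy as np

def log_compress_to_16bit(img32, pedestal=None):
    """img32: float array from the stacker (one channel or RGB)."""
    if pedestal is None:
        pedestal = np.percentile(img32, 5)   # rough sky-background level
    x = np.clip(img32 - pedestal, 0, None)   # remove pedestal, keep >= 0
    y = np.log1p(x) / np.log1p(x.max())      # log curve, normalized to 0..1
    return np.round(y * 65535).astype(np.uint16)
```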

That's the sum total of my post-processing knowledge...
 
  • #14
Andy Resnick said:
..... My flats, on the other hand, are bluish since I acquired them on an overcast night for better uniformity....
The flat doesn't totally even out the background, either- it gets close, but not close enough to do a simple threshold.

Andy
I'm interested specifically in how you are doing your "flats" and how many you used.

Actually, taking a step back ... what camera are you using for your imaging? A DSLR or an astrophotography-specific camera ... make/model of either?
Either way, the process is basically the same.

1) Ensure you are taking the flats with the same zoom (focal length), aperture, and focus as for your actual imaging.

Then use either of these two methods (a sketch of how the flats are applied follows the list) ...
  1. Take a clean white t-shirt. Drape it over the lens or lens hood or front of scope. Smooth it out. Shoot a few frames, rotate the shirt, shoot a few more. Obviously if you’re doing this at night, you’ll need a uniform light source – the colour temperature setting doesn’t matter much.
  2. Select a uniform white or grey display on your iPad, iPhone, Mac or laptop computer. Hold the tablet up against the lens or front of telescope – making sure the lens is completely covered by the display and take several exposures. Rotate the camera or light source to avoid hot spots.
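
As for what is then done with the flats, here is a minimal sketch of the standard flat-field arithmetic (illustrative only; your stacking software handles this for you): the light frame is divided by a master flat normalized to its mean.

```python
# Standard flat-field correction: combine the flats into a master flat,
# normalize it to mean 1.0, and divide it out of each light frame.
import numpy as np

def apply_flat(light, flats):
    """light: raw frame as a float array; flats: list of flat-frame arrays."""
    master = np.median(np.stack(flats), axis=0)  # combine flats into a master
    master /= master.mean()                      # normalize to mean 1.0
    return light / master                        # undo vignetting/dust shadows
```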

For that image above I didn't use any darks, flats or bias frames.
I occasionally use darks; I should more often, as they help remove hot-pixel dots.
Generally I find the vignetting and field-flattener adjustments in Lightroom solve most uneven lighting across the frame.
Bias frames I have never bothered with.
Andy Resnick said:
Since I'm colorblind, I can't rely on my eyes for any color adjustments- I have to resort to 'instrument flying', as they say. I use ImageJ for all color tweaks, since it provides all the quantitation I need.

OK :frown: that kinda explains the green hue of your M51 image ... it must make image processing quite difficult.
Actually, that last image of M51 in your last post looks so much better, colour-wise.
 
  • #15
Andy Resnick said:
Ok, here are some screengrabs- these are not my best, but they are not the worst, either. I don't claim to be an expert user, by the way- let me know if I'm totally off-base. Typically, after stacking I get this:

OK, and sometimes before stretching you may not see much more. Again, don't panic too much, and don't do any processing in DSS,
as you obviously started to do in the second screenshot. Rather, save the file as a 16-bit TIFF with the settings I showed in the screenshot in my earlier post.

Andy Resnick said:
Since I can't work with 32-bit images, the first step is to compress this into 16 bits.

The 32 bit files, are they coming from your camera ?

By any chance, do you have a www site that you could upload a saved 16 bit tiff from DSS to so I could download it and have a play ?

Dave
 
  • #16
Lots of questions...

davenn said:
I'm interested specifically in how you are doing your "flats" and how many you used.

Good question- I've been playing with flat frames for a while, trying to get good results. I have to use flat field correction, otherwise I give up the outer 30% of my frame. I have 3 sets of flats, 1 set when I image at 800mm and 2 when I image at 400mm, each set of flats is around 40 images. The reason I have 2 sets at 400mm is because one set undercorrects and the other overcorrects- when I average the two stacked results, the background is *significantly* easier to deal with. Yes, yes I know to take the flats with the same aperture settings, etc. I've tried with all kinds of light sources as well.

davenn said:
Actually, taking a step back ... what camera are you using for your imaging? A DSLR or an astrophotography-specific camera ... make/model of either?

I'm using a Nikon D810 (not D810A). The lens is mounted onto my motorized tripod.

davenn said:
Select a uniform white or grey display on your iPad, iPhone, Mac or laptop computer. Hold the tablet up against the lens or front of telescope – making sure the lens is completely covered by the display and take several exposures. Rotate the camera or light source to avoid hot spots.

This absolutely does not work (for me). I think it's because the radiance of a laptop/LED TV is not the same as sky radiance- the angular distribution of emitted light isn't the same, so the off-axis bits are illuminated differently. My best results come from imaging a heavily overcast sky (at night).

davenn said:
For that image above I didn't use any darks, flats or bias frames.
I occasionally use darks; I should more often, as they help remove hot-pixel dots.
Generally I find the vignetting and field-flattener adjustments in Lightroom solve most uneven lighting across the frame.
Bias frames I have never bothered with.

I don't bother with bias or darks, either. The flats definitely help with vignetting, tho- at f/2.8 the effect has to be corrected.

davenn said:
Rather, save the file as a 16-bit TIFF with the settings I showed in the screenshot in my earlier post.

The 32 bit files, are they coming from your camera ?

By any chance, do you have a www site that you could upload a saved 16 bit tiff from DSS to so I could download it and have a play ?

Dave

The 32-bit (per channel) files are output from DSS. It's worth a try to do all my post-processing in ImageJ, why not? I think the easiest way for me to get you any files is Dropbox- PM me with an email and I'll set it up.
 
  • #17
Andy Resnick said:
Good question- I've been playing with flat frames for a while, trying to get good results. I have to use flat field correction, otherwise I give up the outer 30% of my frame. I have 3 sets of flats, 1 set when I image at 800mm and 2 when I image at 400mm, each set of flats is around 40 images. The reason I have 2 sets at 400mm is because one set undercorrects and the other overcorrects- when I average the two stacked results, the background is *significantly* easier to deal with. Yes, yes I know to take the flats with the same aperture settings, etc. I've tried with all kinds of light sources as well.

OK all cool :)

Andy Resnick said:
I'm using a Nikon D810 (not D810A). The lens is mounted onto my motorized tripod.

nice :)

Andy Resnick said:
This absolutely does not work (for me). I think it's because the radiance of a laptop/LED TV is not the same as sky radiance- the angular distribution of emitted light isn't the same, so the off-axis bits are illuminated differently. My best results come from imaging a heavily overcast sky (at night).

OK interesting ... so have you tried the white tee-shirt method, with a light source to illuminate it? You can do that at home any time. You can also aim the camera and lens at a white/off-white wall and do flats that way.

Andy Resnick said:
My best results come from imaging a heavily overcast sky (at night).

Difficult to imagine how you get balanced illumination that way ... try the white tee or white wall as a comparison.
Andy Resnick said:
I don't bother with bias or darks, either. The flats definitely help with vignetting, tho- at f/2.8 the effect has to be corrected.

Darks are good to do; as I said, they get rid of hot pixels in the final stacked image.
Andy Resnick said:
The 32-bit (per channel) files are output from DSS. It's worth a try to do all my post-processing in ImageJ, why not?

Just save the TIFFs as 16-bit from DSS; that saves having to do the 32-to-16-bit conversion later on and avoids any problems that conversion may be causing.
Andy Resnick said:
I think the easiest way for me to get you any files is Dropbox- PM me with an email and I'll set it up.

Done

Dave
 
  • #18
Update- davenn and I corresponded a bit, which I found immensely helpful (thanks!). One improvement I made is a more stringent cutoff for 'acceptable' images. I made two cutoffs- one based on the overall DSS score and the other based on the 'full width half max' (FWHM) output by DSS. I don't know the exact DSS algorithms used to compute these quantities, but the former relates to (among other things) how round the stars are- how much RA drift occurs in a frame- and the latter refers to how much blurring/seeing affects an image. The cutoff scores are likely scene-, lens-, and sensor-dependent; for my M13 images the score cutoff is 200 and the FWHM cutoff is 4.7 pixels using an 800/5.6 lens. The 'worst' acceptable image looks like this:

[Image: worst%20acceptableRGB_zpswfvlqcsp.jpg]
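
The selection logic itself is trivial once the per-frame numbers are in hand; a hypothetical sketch (DSS displays a score and FWHM for each frame in its file list, but the step of getting them into Python isn't shown):

```python
# Hypothetical two-cutoff frame selection, per the thresholds quoted above.
SCORE_CUTOFF = 200.0   # scene/lens/sensor dependent
FWHM_CUTOFF = 4.7      # pixels, for these 800/5.6 M13 frames

def acceptable(frames):
    """frames: iterable of (filename, score, fwhm) tuples."""
    return [name for name, score, fwhm in frames
            if score >= SCORE_CUTOFF and fwhm <= FWHM_CUTOFF]
```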


Fortunately, 10 hours of imaging yielded about 200 acceptable frames- I'm not proud of the massive inefficiency (5%!)- but I gained a robust method to convert 32-bit/channel DSS images to 8-bit/channel RGB TIFs. This is what I ended up with:

[Image: 24mbestRGB_zpschowlrsz.jpg]


I got this without any tweaking, only 'automatic' per-channel brightness/contrast settings.

Now that I'm getting the hang of PEC (periodic error correction), I expect my efficiency to increase substantially. Which is good, since the Ring nebula (M57) is coming into position.
 
  • #19
Andy Resnick said:
Fortunately, 10 hours of imaging yielded about 200 acceptable frames- I'm not proud of the massive inefficiency (5%!)- but I gained a robust method to convert 32-bit/channel DSS images to 8-bit/channel RGB TIFs.

Yeah, if you've put in 10 hours of imaging and only 5% of the images are useable then something's wrong. What mount are you using?
 
  • #20
Drakkith said:
Yeah, if you've put in 10 hours of imaging and only 5% of the images are useable then something's wrong. What mount are you using?

I made it sound worse than it is- I've already doubled the amount of time I can 'reliably' acquire a single frame, and I included *total* outdoor time- from setup to teardown. Each frame requires a few seconds to allow the shutter ring-down to dissipate prior to exposure. On a good night I can acquire about 200 images in 90 minutes (actual image acquisition time)- Dr. Resnick needs to be in bed around midnight. I'm generally looking near the celestial equator, so everything is worst-case: Earth's rotation is 15 arcsec/s, each pixel of my sensor covers 1.25 arcsec (800mm lens), so without a tracking mount I can only acquire for about 0.1 seconds before I get star trails. As of yesterday I am able to 'reliably' (keep rate of 20%) acquire for 10s, meaning I've compensated for nearly 99% of the Earth's rotation. After another few rounds of PEC I'll see if I am approaching marginal return or not.
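
For anyone checking that arithmetic, here it is in Python (the D810's ~4.9 µm pixel pitch is an added assumption; the post only gives the resulting pixel scale):

```python
# Pixel scale and maximum untracked exposure, per the figures quoted above.
SIDEREAL_RATE = 15.04                        # arcsec of sky motion per second
pixel_um, focal_mm = 4.9, 800.0
pixel_scale = 206.265 * pixel_um / focal_mm  # arcsec/px (206265"/rad, um->mm folded in)
max_untracked = pixel_scale / SIDEREAL_RATE  # seconds before ~1 px of trailing
print(f"{pixel_scale:.2f} arcsec/px -> ~{max_untracked:.2f} s untracked")
# prints: 1.26 arcsec/px -> ~0.08 s, matching the ~0.1 s figure above
```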

I'm *very* happy with my mount (Losmandy GM-8). Trivial to setup and align.

Going to a shorter lens (400mm), I should be able to acquire for 20 or 30 seconds at a time, with a proportional increase in efficiency.
 
  • #21
Andy Resnick said:
Each frame requires a few seconds to allow the shutter ring-down to dissipate prior to exposure.

Assuming this is when using your D810
You do know that using mirror up / exposing from live view mode significantly reduces camera vibration ?

I'm looking at these 2 posts below ...

Andy Resnick said:
so without a tracking mount I can only acquire for about 0.1 seconds before I get star trails.
Andy Resnick said:
I'm *very* happy with my mount (Losmandy GM-8). Trivial to setup and align.

you have a good mount, but you are not tracking with it ?
Even with reasonable (non-perfect) polar alignment, you should be able to get 1-3 minute exposures (at least 1 minute) before polar alignment errors become apparent.

0.1 s (± a little) exposures are probably the reason for the image problems.

just as a comparison ...
here is a single 30 sec exposure of the Omega Centauri globular cluster ...

[Image: 2015_04_11_3589sm.jpg]


note the lack of all the brownish blobbiness that you can see in your image

If you are not doing 1 min or multiple-minute exposures with that good mount, then you need to get to that point, and you will notice a major improvement in your results :smile:
It's going to improve the signal-to-noise ratio greatly.

Dave
 
  • #22
davenn said:
Assuming this is when using your D810
You do know that using mirror up / exposing from live view mode significantly reduces camera vibration ?

Right- that's what I do. I haven't tried staying in live mode, tho. When the mirror goes up, the vibrations need a few seconds to damp out.

davenn said:
I'm looking at these 2 posts below ...
you have a good mount, but you are not tracking with it ?
Even with reasonable (non-perfect) polar alignment, you should be able to get 1-3 minute exposures (at least 1 minute) before polar alignment errors become apparent.

I have *never* (yet) been able to get past 15 seconds @ 800mm or 25 seconds @ 400mm. Never. Not ever. I can't imagine getting to 1-3 minute exposure. Sure, with a shorter lens- 85mm, 50mm, or 15mm, I can expose for a looooooong time. My goal is to get 30s exposures with the 800mm, I'm only at 10s now...

It is worth mentioning that I don't have an autoguider. I recall finding a blog where someone spec'd out the RA error; IIRC the stock motor has an RMS error of a few arcsec, so I'm well within spec even before PEC.

What's going on on the left half of your image?
 
  • #23
Andy Resnick said:
Right- that's what I do. I haven't tried staying in live mode, tho. When the mirror goes up, the vibrations need a few seconds to damp out.

that's why you use live mode ... the mirror is already up :wink:
Andy Resnick said:
I have *never* (yet) been able to get past 15 seconds @ 800mm or 25 seconds @ 400mm. Never. Not ever. I can't imagine getting to 1-3 minute exposure. Sure, with a shorter lens- 85mm, 50mm, or 15mm, I can expose for a looooooong time. My goal is to get 30s exposures with the 800mm, I'm only at 10s now...

when it is tracking ? ... you need to spend a little more time on your polar alignment
Andy Resnick said:
It is worth mentioning that I don't have an autoguider.

Yup, that is OK; even without one, with a reasonably well-achieved polar alignment, 1-3 minutes shouldn't be a problem.

For 5 min or more, even with good polar alignment, an autoguider becomes necessary.
Back in the old days before autoguiders and digital cameras, I had to guide manually using an off-axis guider.
10-40 minutes of guiding with my eye stuck to the eyepiece wasn't any fun, particularly on the cold winter nites :wink:
Andy Resnick said:
What's going on on the left half of your image?

dunno, what are you seeing that no-one else has commented on in other forums ? :biggrin:

Dave
 
  • #24
davenn said:
when it is tracking ? ... you need to spend a little more time on your polar alignment

It's tracking whenever I power the motors. I have no comparison, but I am confident claiming I can polar align the mount axis to within 1/4 degree. If that's insufficient, I need an alternative to a polar-alignment reticle. I've seen some techniques involving meridian stars (http://astropixels.com/main/polaralignment.html), but it seems like a time-consuming PITA that has to be done every night. Not interested.

davenn said:
Yup, that is OK; even without one, with a reasonably well-achieved polar alignment, 1-3 minutes shouldn't be a problem.

Well, it's not clear. Again, I have nothing to compare my alignment to- 'good', 'accurate', 'reasonable', what does that mean quantitatively? What's the spec? If I am within 0.25 degree from true polar axis, what maximum exposure should I expect?

Edit: I found this: http://articles.adsabs.harvard.edu//full/1989JBAA...99...19H/0000020.000.html, which seems to claim that I need to be polar aligned to within half a degree for a 1-minute exposure using an 800mm lens. But this calculation assumes a 30-micron point spread function, so given my pixel size I'm back down to about 10 seconds for a 1/2-degree misalignment, which is in rough agreement with my experience.
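
Here is that calculation spelled out (a reconstruction of the worst-case arithmetic, where the drift rate is the misalignment angle times Earth's rotation rate):

```python
# Worst-case drift from polar misalignment, and the resulting exposure limits.
OMEGA = 7.292e-5                  # Earth's rotation rate, rad/s
err_arcsec = 0.5 * 3600           # half-degree polar misalignment, in arcsec
drift_rate = err_arcsec * OMEGA   # arcsec of drift per second (~0.13)

# 30 um at 800 mm is ~7.7 arcsec; one pixel here is ~1.25 arcsec.
for tol_arcsec, label in [(7.7, "30 um PSF at 800 mm"), (1.25, "one pixel")]:
    print(f"{label}: ~{tol_arcsec / drift_rate:.0f} s max exposure")
# prints ~59 s and ~10 s: the article's ~1 minute, and the ~10 seconds above
```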

davenn said:
dunno, what are you seeing that no-one else has commented on in other forums ? :biggrin:
Dave

On the left half, the stars are trailing, tracing out a circle. It's most obvious in the upper and lower left corners, but it's the entire left side.
 
  • #25
Andy Resnick said:
On the left half, the stars are trailing, tracing out a circle. It's most obvious in the upper and lower left corners, but it's the entire left side.

ahhhh you are seeing the result of field rotation, because I am using Alt/Az tracking instead of equatorial tracking :biggrin:
 
  • #26
davenn said:
ahhhh you are seeing the result of field rotation, because I am using Alt/Az tracking instead of equatorial tracking :biggrin:

If it's so visible in a 30 s exposure, why do you claim to be able to acquire 1-3 minute single exposures?
 
  • #27
The clouds cleared out for the weekend, two good nights of viewing (C-Town residents had other priorities on Sunday):

[Image: 55m.RGB_zpsz86n09zs.jpg]


[Image: 55mx2.RGB_zpsmvpze9fq.jpg]


55 minutes of exposure, 10s at a time- my PEC efforts are starting to pay off, now keeping twice the number of images (40% vs. 20%), and my 'efficiency' is up around 15% (from 5%). IC 1296 is barely visible (mag. 14.8); I've previously calculated I need at least 1 hour of exposure time to get it above the noise floor.
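
That 1-hour estimate is the usual sqrt(N) argument; a back-of-envelope sketch (the per-frame SNR here is a made-up placeholder, not a measured value):

```python
# Sky-limited stacking: stack SNR grows as sqrt(number of frames), i.e.
# as sqrt(total integration time) for fixed-length subframes.
import math

def stack_snr(snr_per_frame, n_frames):
    return snr_per_frame * math.sqrt(n_frames)

# If one 10 s frame gave SNR 0.3 on IC 1296 (placeholder), 55 min of kept
# frames (330) gives ~5.4, and two hours (720) gives ~8.0:
print(stack_snr(0.3, 330), stack_snr(0.3, 720))
```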
 
  • #28
Andy Resnick said:
If it's so visible in a 30 s exposure, why do you claim to be able to acquire 1-3 minute single exposures?

Hi Andy, I had missed this response from you.

That comment was for you and your unguided EQ mount, NOT my Alt/Az mount :smile:

Nothing to do with that motion on my 30 sec exposure ... scroll back a bit further and you will see how the comment related to your system.

Dave
 
  • #29
Andy Resnick said:
The clouds cleared out for the weekend, two good nights of viewing (C-Town residents had other priorities on Sunday) ... 55 minutes of exposure, 10s at a time- my PEC efforts are starting to pay off, now keeping twice the number of images (40% vs. 20%), and my 'efficiency' is up around 15% (from 5%). IC 1296 is barely visible (mag. 14.8); I've previously calculated I need at least 1 hour of exposure time to get it above the noise floor.

That's great ...
I'm assuming M57, the Ring Nebula in Lyra?
I haven't imaged that one, and it's been many years since I last observed it ... late 80's, early 90's.

Dave

PS. EDIT ... that isn't IC1296, as that is a galaxy; you have imaged a planetary nebula, and I still think it's M57 ... IC1296 isn't too far away from it:
http://www.deepskypedia.com/wiki/IC_1296

this is IC1296 ...
[Image: IC%201296%20mid.jpg]


OK, I see my misunderstanding LOL ... till I did comparisons of star fields, I didn't realize it was so faint in your image ... so here's a big arrow to highlight it for others reading the thread ...

[Image: upload_2016-6-21_19-36-58.png]
 
  • #30
davenn said:
Hi Andy, I had missed this response from you.

That comment was for you and your unguided EQ mount, NOT my Alt/Az mount :smile:

Nothing to do with that motion on my 30 sec exposure ... scroll back a bit further and you will see how the comment related to your system.
Dave

Got it, thanks. I've started to generate enough statistics to feel comfortable making broad statements about my unguided vs. guided mount accuracy. Periodic Error Correction is making a huge difference- PEC training is deadly dull and hard on my eye, but I'm seeing steady improvement and once it's 'good enough' I won't have to train it anymore.
 
  • #31
davenn said:
OK, I see my misunderstanding LOL ... till I did comparisons of star fields, I didn't realize it was so faint in your image ... so here's a big arrow to highlight it for others reading the thread ...

Yeah, it's still down in the dirt. I can pull it out more, but then the non-perfect background subtraction becomes more noticeable as well. I'm also becoming more painfully aware of the display differences between a Mac and WinBlows.
 
  • #32
Andy Resnick said:
I'm also becoming more painfully aware of the display differences between a Mac and WinBlows.

in what way ?
 
  • #33
davenn said:
in what way ?

I think it's 'gamma'- contrast at the low end is higher on my Mac than on my Windows box.
 
  • #34
That's not really Microsoft's fault now, is it?
 
  • #35
glappkaeft said:
That's not really Microsoft's fault now, is it?

Maybe not;
one would have to do a comparison with the same brand of monitors, both calibrated.

Andy, unless your monitors are calibrated, you can't really do a comparison; there's just too much variation.

Here's a professional calibration tool ...
http://www.digitalcamerawarehouse.com.au/prod8682.htm

Dave
 
