DennisN said:
Absolutely gorgeous photos of Jupiter and Saturn, @collinsmark!
Thank you!
After I saw your first Jupiter photo I was about to ask if you had shot Saturn too, and then you posted one of Saturn.

It's very cool and inspiring to see your photos with that much color and detail. To me, having lately seen Jupiter and Saturn mostly as yellow blobs in my scope, your photos come across as quite 3D even though they are 2D.
If I understood correctly, you shot individual sequences/films for each channel (RGB), correct? Funnily enough, I was thinking about that a while ago, but I never put it to the test.
I take the red, green, and blue sequences separately (each with its corresponding red, green, or blue filter) because my camera is monochrome. This lets me avoid dealing with the Bayer matrix of a color camera, which blocks some light and effectively reduces the resolution of the image.
On the other hand, dealing with filters and the filter wheel and combining the colors in post-processing is not without its headaches either. Using a color camera is easier, and I have one that I use on certain occasions. But when trying to eke out the best image of something relatively stable in the sky (like most celestial objects), I'll choose the monochrome camera and filter wheel.
(Oh, and just so we're clear, I wouldn't use color filters together with a color camera. With a color camera, the only filters I would use are a UV/IR blocking filter and/or a light pollution filter. If you have a color camera, by all means capture all three color channels together at the same time!)
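If it helps, here's roughly what the recombination amounts to once each channel has been captured and stacked. This is a minimal sketch assuming one stacked grayscale image per filter; the file names are hypothetical, and real tools also handle channel weighting and registration for you:

```python
# Minimal sketch: combine three separately captured (and separately
# stacked) monochrome channel images into one RGB image.
# File names are hypothetical; any grayscale format Pillow reads works.
import numpy as np
from PIL import Image

def load_channel(path):
    """Load a stacked monochrome frame as a float array in [0, 1]."""
    img = np.asarray(Image.open(path), dtype=np.float64)
    return img / img.max()

r = load_channel("saturn_R_stacked.tif")
g = load_channel("saturn_G_stacked.tif")
b = load_channel("saturn_B_stacked.tif")

# The three channels must already be registered (same size, aligned);
# in practice you'd also weight them to fix the color balance.
rgb = np.stack([r, g, b], axis=-1)
Image.fromarray((rgb * 255).astype(np.uint8)).save("saturn_RGB.png")
```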
For the images I posted of Saturn and Jupiter, I used a technique called "Lucky Imaging." Here's an article on it. By the way, as a rule of thumb, I pretty much always keep the best 50% of frames for stacking.
https://skyandtelescope.org/astronomy-blogs/imaging-foundations-richard-wright/lucky-imaging/
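To give a flavor of the frame-selection idea in code: this is not the article's or Autostakkert!'s actual quality metric (I'm using variance of the Laplacian as a stand-in sharpness score), just a sketch of ranking frames and keeping the best half:

```python
# Toy illustration of Lucky Imaging frame selection: score every frame
# by a sharpness proxy (variance of the Laplacian here; real stacking
# software uses its own quality metrics) and keep the best 50%.
import numpy as np
from scipy.ndimage import laplace

def sharpness(frame):
    """Higher Laplacian variance ~ more fine detail ~ better seeing."""
    return laplace(frame.astype(np.float64)).var()

def select_lucky(frames, keep_fraction=0.5):
    """Return the best `keep_fraction` of frames, sharpest first."""
    scored = sorted(frames, key=sharpness, reverse=True)
    n_keep = max(1, int(len(scored) * keep_fraction))
    return scored[:n_keep]

# e.g. frames = [frame0, frame1, ...] as 2-D numpy arrays from the video
# best = select_lucky(frames)  # these go on to alignment and stacking
```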
I used FireCapture (free software) to control the camera and filter wheel during capture. The individual frames were stored as .ser video files. I could have used .avi files, but .ser is increasingly standard for astrophotography these days.
http://www.firecapture.de/
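For the curious, the .ser container is simple enough to inspect by hand. Below is a minimal header reader based on my understanding of the SER v3 layout (a 14-byte FileID, seven int32 fields, three 40-byte strings, and two int64 timestamps, 178 bytes total); double-check the offsets against the official spec before relying on them:

```python
# Minimal .ser header reader (SER v3 layout as I understand the spec;
# verify field offsets against the official specification).
import struct

def read_ser_header(path):
    with open(path, "rb") as f:
        raw = f.read(178)                 # fixed-size header
    file_id = raw[:14].decode("latin-1")  # should be "LUCAM-RECORDER"
    (lu_id, color_id, little_endian, width, height,
     pixel_depth, frame_count) = struct.unpack("<7i", raw[14:42])
    observer  = raw[42:82].decode("latin-1").rstrip("\x00 ")
    telescope = raw[122:162].decode("latin-1").rstrip("\x00 ")
    return {
        "file_id": file_id, "color_id": color_id,
        "width": width, "height": height,
        "bits_per_pixel": pixel_depth, "frame_count": frame_count,
        "observer": observer, "telescope": telescope,
    }

# hdr = read_ser_header("saturn_R.ser")   # hypothetical file name
# print(hdr["frame_count"], "frames at", hdr["width"], "x", hdr["height"])
```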
The stacking software was Autostakkert! (stacking is done later, e.g., the next morning or day).
https://www.autostakkert.com/
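For intuition, here's a toy version of the align-and-stack step: a single global shift per frame estimated by cross-correlation, followed by a plain average. Autostakkert! actually does multi-point (local) alignment, which copes with seeing distortion far better; this sketch only shows the principle:

```python
# Toy align-and-stack: register each selected frame to a reference with
# one global translation (found via FFT cross-correlation), then average.
import numpy as np
from scipy.ndimage import shift as nd_shift

def align_to(reference, frame):
    """Estimate the (dy, dx) translation via cross-correlation and undo it."""
    cross = np.fft.ifft2(np.fft.fft2(reference) * np.conj(np.fft.fft2(frame)))
    dy, dx = np.unravel_index(np.argmax(np.abs(cross)), cross.shape)
    # Wrap shifts larger than half the image back to negative offsets.
    if dy > reference.shape[0] // 2: dy -= reference.shape[0]
    if dx > reference.shape[1] // 2: dx -= reference.shape[1]
    return nd_shift(frame, (dy, dx), order=1, mode="nearest")

def stack(frames):
    """Mean-stack frames after aligning them all to the first one."""
    ref = frames[0].astype(np.float64)
    aligned = [ref] + [align_to(ref, f.astype(np.float64)) for f in frames[1:]]
    return np.mean(aligned, axis=0)
```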
The wavelet sharpening (this is where the image goes from a somewhat blurry blob to something quite sharp) was done with Registax. In the future I might do this step in PixInsight, but I wanted to produce all the planetary images I posted here with free software, and Registax does a good job. If you've never used Registax before, it's also capable of stacking (and was the first widely available program to do it), but I do my stacking with Autostakkert! If you want to use Registax only for the wavelet sharpening, open your image file (e.g., a .tif) and it will jump straight into the wavelet sharpening.
https://www.astronomie.be/registax/
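If you're curious what the wavelet layers are doing, here's a simplified multi-scale sharpener in the same spirit (difference-of-Gaussian detail layers with per-layer gains). It is not Registax's actual algorithm, and the gains are made-up example values:

```python
# Simplified multi-scale ("wavelet-style") sharpening in the spirit of
# Registax's layer sliders: split the image into difference-of-Gaussian
# detail layers plus a smooth residual, boost the fine layers, re-sum.
import numpy as np
from scipy.ndimage import gaussian_filter

def wavelet_sharpen(image, gains=(2.0, 1.5, 1.2, 1.0)):
    img = image.astype(np.float64)
    layers, current = [], img
    for level in range(len(gains)):
        smoothed = gaussian_filter(current, sigma=2.0 ** level)
        layers.append(current - smoothed)   # detail at this scale
        current = smoothed                  # pass residual down
    out = current                           # smooth residual
    for layer, gain in zip(layers, gains):
        out += gain * layer                 # boost finer scales more
    return np.clip(out, img.min(), img.max())
```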
And finally, instead of Photoshop (which I used until they switched over to their subscription-only model), I use GIMP for color adjustments, contrast adjustments, and final editing. GIMP is free and pretty capable.
https://www.gimp.org/
Have you tried shooting Mars with this technique?
Not yet, but I'm planning on it. Mars reaches opposition on October 13 (in 2020), so any night around that time would be a good time to image it.
Here's my planetary imaging setup (laptop and cables not shown):
I adjust the atmospheric dispersion corrector (ADC) by removing the filter wheel and camera and putting a 1.25" diagonal and eyepiece in their place (which, in my case, happens to be nearly parfocal). Either that, or I'll swap the monochrome camera for the color camera while making the ADC adjustments.
It's easier to do ADC adjustments in color where you can see the chromatic fringing. Otherwise it's difficult to tell the difference between focusing problems and ADC problems.
Then I put the filter wheel and monochrome camera back in place and image.
Here's a cropped part of a single frame that I took of Saturn (this particular frame was one of many taken with the red filter in the optical train):
It looks pretty noisy, huh? It's supposed to be that way! (Well, I should say it's at least expected to be that way.) My goal is to capture as many frames per second as possible, within reason: it can't be all noise; the planet has to be clear enough that software like Autostakkert! can perform its image stabilization. So my camera (USB 3.0 capable) chugs away at about 78 fps, which limits my shutter speed to around 12-13 ms. Under those conditions, the dominant noise source is the CMOS sensor's read noise. Those are the reasons I chose the ZWO ASI290MM for planetary imaging: it's USB 3.0 capable for fast frame-rate downloads, it has fairly low read noise, and it doesn't break the bank (compared to other astrophotography cameras).

By the way, if you're choosing a camera for planetary imaging only, don't bother with a cooled camera (unless you already have one, or want the same camera for both planetary and deep sky). Thermal noise is not the dominant noise source in planetary imaging; it's read noise that matters here. (To be thorough, some of the noise you see in the above frame is shot noise from individual photons arriving from the planet itself -- but it's not thermal noise, is my point.) Cooling the camera doesn't really help much here.
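Just to make the arithmetic explicit (the 12-minute run length below is a hypothetical figure, chosen to be consistent with the ~56,000 frames per channel I mention further down):

```python
# Back-of-envelope numbers from the paragraph above.
fps = 78.0                      # camera frame rate over USB 3.0
max_exposure_ms = 1000.0 / fps  # exposure can't exceed the frame period
print(f"max exposure = {max_exposure_ms:.1f} ms")   # ~12.8 ms

capture_minutes = 12            # hypothetical length of one channel's run
frames = fps * capture_minutes * 60
print(f"{frames:,.0f} frames per channel")          # ~56,160 frames
```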
From there, you let the magic of the Central Limit Theorem get rid of the noise for you. That's what the stacking is all about. Noise, whether read noise, shot noise, or something else, shows up as variation: a standard deviation about the mean. It's the mean of the signal that we're looking for, and the standard deviation that we're trying to minimize. Per the central limit theorem, if you stack (i.e., average) N individual frames, the standard deviation is multiplied by a factor of \frac{1}{\sqrt{N}}, meaning your signal-to-noise ratio (SNR) increases by \sqrt{N}.
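You can verify the \frac{1}{\sqrt{N}} behavior numerically in a few lines (a toy simulation with made-up numbers, not real camera data):

```python
# Numerical check of the 1/sqrt(N) rule: average N noisy copies of a
# constant signal and watch the residual noise shrink as N grows.
import numpy as np

rng = np.random.default_rng(0)
signal, sigma = 100.0, 20.0     # arbitrary mean and per-frame noise

for n in (1, 100, 10_000):
    frames = signal + sigma * rng.standard_normal((n, 1000))
    stacked = frames.mean(axis=0)   # "stacking" = averaging over frames
    print(n, stacked.std())         # ~ sigma / sqrt(n): 20, 2, 0.2
```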
For reference, the image of Saturn in post #879 started with raw data of around 56,000 frames per color channel. Half of those were thrown away (per Lucky Imaging), meaning that roughly 28,000 frames were stacked for each color channel. That corresponds to an SNR increase of roughly \sqrt{28000} \approx 167.