Jupiter, on the night of 2025-01-19 UT
Figure 1: Jupiter 2025-01-19 6:13.9 UT
I might have said this sort of thing before, but I'll explain it again here for the sake of newcomers. Planetary astrophotography using lucky-imaging processing has its pros and cons compared to more conventional deep sky astrophotography.
Planetary astrophotography pros:
- Planetary cameras are cheaper. They're a fraction of the cost of big, expensive, cooled cameras commonly dedicated to deep sky astrophotography. For planetary, you don't need a cooled camera.
- You don't need a separate guide camera or guide scope. If any automated guiding is done at all (FireCapture supports guiding, if you're curious), it's done by the main planetary camera used for imaging. And if you skip automated guiding, planetary is pretty forgiving of manual guiding: all you need to do is nudge the mount with the hand controller now and then so the target doesn't drift out of frame during acquisition.
- Since the planetary camera's sensor is small, any filters you use can also be smaller, and thus cheaper.
- Calibration frames -- FLATs and DARKs -- are not as important for planetary. Sure, you can make these if you really want to, but they'll have comparatively little impact. It's fine to neglect them.
Planetary astrophotography cons:
- Planetary involves a stupid amount of raw data. Here, "stupid" refers to one step above "ludicrous" and two steps above "incredible." It's just a stupid, stupid amount of data. The session posted here produced between 1 and 2 terabytes of raw data -- that's 1000 to 2000 gigabytes. Planetary can fill up a modern storage drive in a single session. (The sketch after this list puts rough numbers on where it all comes from.)
- While guiding isn't really an issue, pointing is. Finding your target in the first place can be challenging.
- Planetary astrophotography is more hands-on than deep sky. It doesn't lend itself as well to automation. I try to automate wherever I can, but planetary presents some unique challenges and there's only so much one can do.
- When processing planetary data, it helps to have a pretty beefy computer. And you'll need some way of transferring the (stupid amount of) raw data to that beefy computer. Transferring a terabyte of data over a Gigabit Ethernet connection still takes hours; if you rely on wireless, it might take over a day just to transfer the data, before any processing even starts.
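To put rough numbers on the data volume and transfer time, here's a back-of-the-envelope sketch in Python. The ROI, frame rate, and video count are illustrative assumptions (plausible for an ASI290MM-class setup), not the exact capture settings from this session.

```python
# Back-of-the-envelope estimate of raw data volume for a planetary session.
# ROI size, frame rate, and video count are assumptions for illustration.
roi_w, roi_h = 800, 800      # capture region of interest, pixels (assumed)
bytes_per_px = 1             # 8-bit mono capture (assumed)
fps = 80                     # ~12 ms exposures -> roughly 80 frames/sec
video_s = 40                 # length of each capture video, seconds
n_videos = 729               # assumed total videos for the night

bytes_per_video = roi_w * roi_h * bytes_per_px * fps * video_s
total_tb = n_videos * bytes_per_video / 1e12
print(f"~{total_tb:.1f} TB of raw video")      # ~1.5 TB with these numbers

# Transfer time over Gigabit Ethernet (~125 MB/s at best):
hours = n_videos * bytes_per_video / 125e6 / 3600
print(f"~{hours:.1f} h to move over 1 GbE")    # a few hours, as noted above
```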
Lucky-imaging:
The goal of lucky-imaging is to mitigate atmospheric seeing. Seeing warps and distorts the target with a characteristic time on the order of tens of milliseconds. To counteract this, you take many, many short-exposure images -- snapshots of the target. Each snapshot is warped a little differently than the rest, to varying extents. For the project posted here, each snapshot had an exposure time of about 12 milliseconds. The snapshots were recorded by the planetary camera as videos, each lasting 40 seconds. Any longer than ~40 sec and the result would be blurred not just by atmospheric seeing, but by the planet's rotation.
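Where does the ~40 sec ceiling come from? A rough calculation, using Jupiter's well-known rotation period, its approximate apparent size near opposition, and the C14's aperture (listed under Equipment below):

```python
import math

# How far does a feature at Jupiter's central meridian move during one video?
P = 9.925 * 3600    # Jupiter's rotation period, seconds (~9.93 h)
t = 40              # video length, seconds
radius = 45.0 / 2   # apparent radius, arcsec (~45" diameter near opposition)

smear = radius * (2 * math.pi * t / P)    # arcsec of rotational drift
print(f"rotational smear over {t} s: {smear:.2f} arcsec")    # ~0.16"

# Rayleigh diffraction limit of a 356 mm aperture (C14) at 550 nm:
limit = math.degrees(1.22 * 550e-9 / 0.356) * 3600
print(f"diffraction limit: {limit:.2f} arcsec")              # ~0.39"

# 40 s keeps rotational smear safely below the diffraction limit;
# go much longer and rotation, not seeing, starts to dominate the blur.
```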
The lucky imaging software then evaluates all the snapshots and throws away the worst of them. In my case, for what's shown in this post, 50% of them.
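AutoStakkert! has its own quality estimator; as a stand-in for the idea, here's a minimal sketch that ranks frames by variance of the Laplacian (a common, if crude, sharpness metric) and keeps the best half:

```python
import numpy as np
from scipy.ndimage import laplace

def keep_best_half(frames):
    """Rank frames by a crude sharpness score and keep the best 50%.

    `frames` is an (N, H, W) float array. Variance of the Laplacian is
    a stand-in quality metric, not what AutoStakkert! actually uses.
    """
    scores = np.array([laplace(f).var() for f in frames])
    order = np.argsort(scores)[::-1]          # sharpest first
    return frames[order[: len(frames) // 2]]
```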
If your target is large enough and bright enough (Jupiter, Saturn, Mars, the Moon, or the Sun make good candidates), the lucky imaging software can latch on to surface details on the target and de-warp the individual snapshots, using the ensemble average as a reference.
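As a sketch of that reference idea, align each snapshot to the ensemble average. This is a global-shift-only illustration using scikit-image's phase correlation, not AutoStakkert!'s actual algorithm, which de-warps locally around many alignment points:

```python
import numpy as np
from scipy.ndimage import shift as subpixel_shift
from skimage.registration import phase_cross_correlation

def align_to_average(frames):
    """Register each frame onto the ensemble average.

    Global shifts only; real lucky-imaging software also de-warps
    locally around many alignment points on the disk.
    """
    reference = frames.mean(axis=0)
    aligned = []
    for f in frames:
        offset, _, _ = phase_cross_correlation(reference, f, upsample_factor=10)
        aligned.append(subpixel_shift(f, offset, order=1))
    return np.stack(aligned)
```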
Only then, after lucky selection and de-warping, are the individual snapshots stacked. Stacking is where the Central Limit Theorem comes into play: averaging N frames beats the random noise down by roughly a factor of √N, increasing the signal-to-noise ratio.
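A quick simulation of why stacking helps, with made-up numbers: the signal is identical in every frame while the noise is random, so the average converges on the signal and the noise in the mean shrinks like √N.

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 100.0   # arbitrary "true" pixel value
sigma = 20.0     # per-frame noise level (made up)

for n in (1, 100, 10_000):
    frames = signal + rng.normal(0.0, sigma, size=n)
    print(f"N={n:>6}: mean {frames.mean():7.2f}, "
          f"expected noise in mean {sigma / np.sqrt(n):5.2f}")
```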
Another step involves sharpening via software. There will always be some residual blur from the atmosphere, and conventional sharpening software does a pretty good job at undoing that.
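The initial sharpening here was done with PixInsight's MultiscaleLinearTransform and UnsharpMask (see the processing notes below). The unsharp-mask idea itself is simple; a minimal numpy/scipy version, with illustrative defaults rather than the settings actually used:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, radius=2.0, amount=1.5):
    """Classic unsharp mask: boost the difference from a blurred copy.

    `radius` and `amount` are illustrative defaults, not the values
    used for the images in this post.
    """
    blurred = gaussian_filter(image, sigma=radius)
    sharpened = image + amount * (image - blurred)
    return np.clip(sharpened, 0.0, None)
```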
The final result is an image that has details on par with the diffraction limit of the telescope optics.
De-rotation:
Once you've done the lucky-imaging processing and initial sharpening, you can do some additional and intentional "warping" to account for the planet's rotation, allowing you to stack a few more processed images into a single image. Think of this as a noise reduction technique, giving the Central Limit Theorem one last go at the data. This is the primary purpose of WinJUPOS derotation.
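Conceptually, derotation projects each image onto the planet's surface coordinates, shifts it in longitude to a common reference time, and projects it back; WinJUPOS handles the full disk geometry. A stripped-down sketch of just the middle step, assuming you already have cylindrical (latitude × longitude) maps:

```python
import numpy as np
from scipy.ndimage import shift as subpixel_shift

def derotate_map(cyl_map, dt_seconds, period_s=9.925 * 3600):
    """Shift a cylindrical (lat x lon) map in longitude to a reference time.

    In map coordinates, rotation is a pure circular shift in longitude.
    The hard part -- projecting the disk to and from this map -- is what
    WinJUPOS actually does; this is just the shift.
    """
    n_lon = cyl_map.shape[1]                   # pixels spanning 360 degrees
    dlon_px = (dt_seconds / period_s) * n_lon  # longitude drift in pixels
    return subpixel_shift(cyl_map, (0.0, dlon_px), mode="wrap", order=1)

# Maps shifted to the same reference time can then simply be averaged:
# stacked = np.mean([derotate_map(m, t_ref - t) for m, t in timed_maps], axis=0)
```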
Here's another image from the session:
Figure 2: Jupiter 2025-01-19 4:04.3 UT. Here you can see a bit of an outbreak in the SEB (South Equatorial Belt) to the right in the image, and some neat weather patterns in the EB (Equatorial Band) in the center.
And finally, here's a video I put together of the session, where each frame is a lucky-imaging processed image. It goes without saying that these videos take a lot of time and effort.
Figure 3: Video. Jupiter from 4:04 to 6:52 UT.
Equipment:
Celestron C14 EdgeHD (telescope)
Sky-Watcher EQ8-R Pro (mount)
TeleVue 2x PowerMate (a fancy Barlow lens)
Astronomik RGB filter set
ZWO ASI 290MM (monochrome camera)
Software:
FireCapture (for acquisition)
AutoStakkert! (lucky imaging processing)
WinJUPOS (for derotation)
PixInsight with RC Astro plugins (misc. processing)
CyberLink PowerDirector (to make the video)
Acquisition/Processing:
Location: San Diego
Atmospheric seeing: Not too terribly evil.
Sub-frame exposure time: ~12 ms.
Acquisition video length: 40 sec, alternating R-G-B-R-G-B...
Lucky Imaging:
Best 50% frames kept
Drizzle/Resampling not used.
Initial sharpening was done on the AutoStakkert! output images using PixInsight processes:
MultiscaleLinearTransform
UnsharpMask
For each color channel (R, G, or B), 5 images were derotated and stacked using WinJUPOS. The channels were then combined into a single RGB image, also using WinJUPOS. Derotation also kept the sequence smooth and continuous across re-focusing breaks and the meridian flip.
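The channel-combination step amounts to registering the three derotated mono stacks and placing them in one RGB cube (WinJUPOS does both here). Assuming already-aligned channel stacks, the combination itself is nearly one line of numpy:

```python
import numpy as np

def combine_rgb(r, g, b):
    """Stack three aligned, derotated mono channel images into one RGB image.

    Assumes r, g, b are float arrays on a common scale; WinJUPOS also
    handles the per-channel derotation and sub-pixel registration.
    """
    rgb = np.stack([r, g, b], axis=-1)
    return np.clip(rgb / rgb.max(), 0.0, 1.0)   # normalize for display
```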
Final adjustments in PixInsight using
CurvesTransformation
NoiseXTerminator
BlurXTerminator
243 individual, processed frames were then imported into PowerDirector to create the final video.