Level of detail in prime focus vs. eyepiece images

In summary: You will get poorer results if you are not stacking images. I suggest looking at a YouTube video or two on how to take a short movie of the Sun and stack the frames within it; there are multiple free tools for the job. The final item on your list is really the matching of the chip's pixel size to the image scale, but you need to look at the other points first, as they are the most significant, and you should then be able to get some decent results.
  • #211
You'll probably get the most bang for your buck by applying lucky-imaging techniques to the video of Saturn. If you can find a way to convert your video to a format used by AutoStakkert! (such as .avi), and then process it in AutoStakkert!, I think you'll find the results quite pleasing.

Ideally, of course, your original video should be a video without compression. But that's not an option on a cell-phone, since it would take many, many gigabytes just to store a short video. But I've processed video before which started out compressed, and it does work. It doesn't work as well as having the original uncompressed, but it does work.

The stacking will average out compression artifacts, bringing back some detail in the stacked image -- detail which can then be brought out further with appropriate wavelet sharpening.

Without the stacking, using just a single compressed image, the wavelet sharpening really just sharpens up the compression artifacts.
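As a rough illustration of why the stacking helps (not a real workflow; the file name and the naive mean stack are just placeholders, and AutoStakkert! also aligns and quality-selects frames before stacking):

```python
import cv2
import numpy as np

# Hypothetical input clip; AutoStakkert! does this (plus alignment and
# quality selection) far better. This is only a naive mean stack.
cap = cv2.VideoCapture("saturn.avi")

acc, count = None, 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = frame.astype(np.float64)
    acc = frame if acc is None else acc + frame
    count += 1
cap.release()

stack = acc / count  # random noise and compression artifacts average down ~1/sqrt(N)
cv2.imwrite("saturn_mean_stack.png", np.clip(stack, 0, 255).astype(np.uint8))
```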

Edit: My mistake, I see you're using a DSLR. But the comment remains: if your DSLR allows you to store the video in an uncompressed format, use that. If not, there's still hope: convert it to .avi, run it through AutoStakkert!, and the results should still be better than with no lucky-imaging stacking at all.
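For example, something along these lines should do the conversion (assuming ffmpeg is installed; the file names are placeholders, and dedicated tools like PIPP, mentioned below, are purpose-built for this):

```python
import subprocess

# Re-encode a hypothetical DSLR clip to uncompressed video in an AVI container,
# which AutoStakkert! can open. This trades disk space for no further compression
# loss; it cannot recover detail already discarded by the camera's codec.
subprocess.run(
    ["ffmpeg", "-i", "saturn.mp4", "-c:v", "rawvideo", "-pix_fmt", "yuv420p", "saturn.avi"],
    check=True,
)
```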

Another Edit: After rereading your posts, I see you did some lucky imaging techniques after all. What program did you use for your lucky imaging stacking?
 
Last edited:
  • Like
Likes Devin-M
  • #212
collinsmark said:
Ideally, of course, your original video should be a video without compression. But that's not an option on a cell-phone, since it would take many, many gigabytes just to store a short video. But I've processed video before which started out compressed, and it does work. It doesn't work as well as having the original uncompressed, but it does work.
My cam only shoots MP4, but yes, I've seen that it does work in AS!. I use PiPP to convert the MP4 into .AVI or .SER, along with centring and rejection of bad frames, then push it to AS!3.

I also noticed that a higher-FPS video makes a difference. It would also be nice to have an uncompressed format, but I feel another bottleneck is the 50 FPS maximum that my cam offers. I should start using an entry-level USB camera for planetary work, as they can exceed hundreds of FPS and can record raw.
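For anyone curious what the "rejection of bad frames" step amounts to, here is a toy version of the idea (PIPP and AS!3 use their own, better quality estimators; the file name and the 25% cut are placeholders):

```python
import cv2
import numpy as np

def sharpness(gray: np.ndarray) -> float:
    """Variance of the Laplacian: a simple proxy for frame sharpness."""
    return float(cv2.Laplacian(gray, cv2.CV_64F).var())

cap = cv2.VideoCapture("saturn.avi")  # placeholder clip
scores = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    scores.append(sharpness(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)))
cap.release()

# Keep, say, the sharpest 25% of frames for stacking.
cutoff = np.percentile(scores, 75)
keep = [i for i, s in enumerate(scores) if s >= cutoff]
print(f"keeping {len(keep)} of {len(scores)} frames")
```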
 
  • #213
I did a star sharpness test with a 30 second tracked exposure at 1800mm f/12 on the cheap $425 Star Adventurer 2i Pro mount overloaded by a factor of 2-3x above the weight limit...

IMG-7301.jpg


IMG-7301_cropped.jpg


IMG-7301_cropped_stars.jpg

Center (RA, Dec): (0.240, -5.156)
Center (RA, hms): 00h 00m 57.532s
Center (Dec, dms): -05° 09' 20.635"
Size: 30.5 x 20.3 arcmin
Radius: 0.305 deg
Pixel scale: 0.463 arcsec/pixel

I'm supposedly at 0.463 arcsec/pixel, and if I multiply that by the 8 px radius of the dim star I'd call it around 3.7 arcsec of effective resolution... around 3x sharper than the 600mm f/9. I'm more surprised by how round the stars are with a 30-second exposure at such a long focal length (1800mm) on such an overloaded cheap mount...
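That plate-solved pixel scale is also consistent with the usual small-angle formula, pixel scale ≈ 206.265 × pixel pitch (µm) / focal length (mm). A quick check, assuming the Nikon D800's roughly 4.88 µm pixel pitch (not stated in the thread) and the ~2180 mm effective focal length quoted in later posts:

```python
def pixel_scale_arcsec(pixel_pitch_um: float, focal_length_mm: float) -> float:
    """Standard small-angle formula: 206.265 * pitch / focal length."""
    return 206.265 * pixel_pitch_um / focal_length_mm

# Nikon D800 pitch ~4.88 um (assumed). The plate solve's 0.463"/px matches an
# effective focal length near 2180 mm rather than the nominal 1800 mm.
print(round(pixel_scale_arcsec(4.88, 2180.0), 3))  # ~0.462 arcsec/px
print(round(pixel_scale_arcsec(4.88, 1800.0), 3))  # ~0.559 arcsec/px

# Effective resolution implied by an ~8 px star radius at 0.463"/px:
print(round(0.463 * 8, 1))  # ~3.7 arcsec
```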

Notice I'm using all 3 counterweights from all 3 trackers, plus a heavy DSLR with lens and panhead just as a counterweight!
1F1FCC83-DB6D-42F4-B065-1701019B3F5B.jpeg
 
  • Like
Likes collinsmark
  • #214
Devin-M said:
I did a star sharpness test with a 30 second tracked exposure at 1800mm f/12 on the cheap $425 Star Adventurer 2i Pro mount overloaded by a factor of 2-3x above the weight limit...
Interesting. I've found people on other sites complaining about the Star Adventurer not being able to track very accurately even without crossing the payload limit and with precise polar alignment. I guess either the difference is in the "Pro" model or you have extraordinary balancing skills.
 
  • #215
PhysicoRaj said:
Interesting. I've found people on other sites complaining about the Star Adventurer not being able to track very accurately even without crossing the payload limit and with precise polar alignment. I guess either the difference is in the "Pro" model or you have extraordinary balancing skills.
I have 3 bits of advice:

1) Use the PS Align app on your phone to find out exactly where to aim the scope in relation to Polaris (this will vary by time)

2) Polar align after putting the telescope, cameras, etc. on the mount. It's tempting to polar align before you put on the heavy telescope, but there is enough wiggle that the polar alignment will shift when you add the extra weight. I do a very rough alignment before adding the telescope and then a fine alignment after the scope is on.

3) Make sure the telescope is well balanced not only on the right-ascension axis but also on the declination axis (which could be impossible without extra equipment). I use a macro focusing rail to balance the declination axis; it acts much like the dovetail found on many telescopes.

3DE288E2-6A1D-4503-B30A-6FB68763C831.jpeg
 
Last edited:
  • Like
Likes collinsmark and PhysicoRaj
  • #216
collinsmark said:
Another Edit: After rereading your posts, I see you did some lucky imaging techniques after all. What program did you use for your lucky imaging stacking?
I was shooting stills in RAW mode, so I manually chose the less-blurred frames, then stacked and wavelet-sharpened them in Lynkeos, since I use macOS.
 
  • Like
Likes collinsmark
  • #217
I should mention it took nearly 2 minutes for the telescope to stop wobbling after I pushed the button. I was shooting in interval mode: 3-second shutter delay after mirror-up, 30 seconds per shot, 10-second interval between shots, ISO 6400. The test shot I showed was the 5th shot in the sequence; the first 3-4 shots showed motion blur from the tripod still settling over those first 90-120 seconds, which progressively diminished until the 5th shot.
 
  • #218
PhysicoRaj said:
Interesting. I've found people on other sites complaining about the Star Adventurer not being able to track very accurately even without crossing the payload limit and with precise polar alignment. I guess either the difference is in the "Pro" model or you have extraordinary balancing skills.
I pushed the Star Adventurer 2i Pro mount even further a couple of nights ago, with 3.5-minute subframes at 2180mm focal length, f/14.5, while targeting the Phantom Galaxy (Messier 74). This was the best single 3.5-minute exposure I obtained...

view in WorldWideTelescope

IMG-7783-2.jpg


4246498-1-png.png


4246498-2-png.png


You can see from this video that the other 22 frames came out as garbage because of the wind that was present...

Nevertheless I stacked 16 of the 22 subframes and obtained this stacked image:

phantom_galaxy.jpg


The mount was overloaded, probably 2-3x over the weight limit (all 3 counterweights from the 3 trackers I own were used on this single mount, and a camera with a 600mm f/9 lens and panhead was also used as additional counterweight)

8569733b-934a-42c5-86cd-5af930d75e99-jpeg.jpg
 
Last edited:
  • Like
Likes PhysicoRaj and collinsmark
  • #219
More tracking/sharpness tests from last night with the cheap ($425) Star Adventurer 2i Pro tracker, overloaded at least 3x over its weight limit (but well balanced on both the RA and Dec axes), at 2180mm focal length, f/14.5 (Meade LX85 M6 "1800mm f/12" Maksutov-Cassegrain), with a Nikon D800 DSLR and 30-second exposures at ISO 1600. This is a single sub-frame, with a dim star enlarged. Conditions were calm (no wind). I took 120 exposures of 30 seconds each (3-second shutter delay after mirror-up) under moonless Bortle 6 skies, and about half of them came out with round stars. This dim star has a radius of roughly 8 px at the 0.463 arcsec/pixel of this telescope/sensor combo.

8x "Nearest Neighbor" Enlargement:
IMG-7883_star_8x_enlargement.jpg


100% Crop:
IMG-7883_100pc_crop_800x534.jpg


Full Frame:
IMG-7883_full_frame_800x534.jpg


Final Image (Stacked, Histogram Stretched & Cropped):
Orion_620x620_square_crop.jpg


Telescope:
72277B57-23B6-4A63-88D9-E07352A3E537.jpeg

06137F5A-D1A4-43D6-81F2-61CBE108861D.jpeg

33966B50-92A7-41CB-9B2F-4C22870DA1B8.jpeg


Bahtinov Focusing Mask Diffraction Spikes:
DSC_7869.JPG
 
Last edited:
  • Like
Likes collinsmark
  • #220
I like the dynamic range of this Orion image. What post-processing did you do?

I always thought DR was a characteristic of the sensor; I'm not sure whether the optics or anything else can also affect it. So one question that is still in my head is whether a stack of several frames has better dynamic range than a single sub-exposure. I was thinking that (ideally) by stacking 3 kinds of frames (under-exposed, mid-exposed and over-exposed) I could get an image with better DR than the 3 individual frames.
 
  • #221
PhysicoRaj said:
I like the dynamic range of this Orion image. What post-processing did you do?

I always thought DR was a characteristic of the sensor; I'm not sure whether the optics or anything else can also affect it. So one question that is still in my head is whether a stack of several frames has better dynamic range than a single sub-exposure. I was thinking that (ideally) by stacking 3 kinds of frames (under-exposed, mid-exposed and over-exposed) I could get an image with better DR than the 3 individual frames.
The key is knowing that when shooting in RAW mode, there is a lot of color information initially hidden in the dark areas of the image, which you later bring out by histogram stretching. So I do test exposures, starting with low ISO and a short shutter time, usually 30 seconds at ISO 400. If I can't see the object on the screen, I gradually increase the ISO until I can just barely see it. If I reach ISO 6400 and still don't see it, I start increasing the shutter time: 1 minute, 2 minutes, 5 minutes, 10 minutes, 20 minutes, etc. Whenever I can just barely see the object, I know it's a workable exposure and that I'll bring out the rest of the detail with histogram stretching in Adobe Lightroom after stacking.

The more dynamic range your sensor has, the better, so 14-bit and 16-bit (per channel) RAW files are better than 12-bit RAW files. The Nikon D800 I use produces 14-bit RAW files, whereas a lot of lower-end cameras from the same era only generate 12-bit RAW files. Each additional bit essentially doubles the number of tonal levels per channel the camera is working with, but those extra levels are generally hidden (at first) in the darkest parts of the picture.
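For what it's worth, here is a toy version of what a histogram stretch does (this is not Lightroom's actual algorithm; the asinh curve, the black-point choice, and the random test data are all placeholders):

```python
import numpy as np

def stretch(img: np.ndarray, black_point: float, strength: float = 500.0) -> np.ndarray:
    """Simple asinh-style stretch of linear data scaled to [0, 1].

    Lifts the faint signal hiding just above the black point while compressing
    the bright end -- similar in spirit to aggressive shadows/curves moves.
    """
    x = np.clip(img - black_point, 0.0, None)
    return np.arcsinh(strength * x) / np.arcsinh(strength * x.max())

# Stand-in for 14-bit RAW data scaled to [0, 1]; the black point is estimated
# from the background (e.g., the median of a starless patch).
raw = np.random.default_rng(0).poisson(40, (100, 100)).astype(float) / (2**14 - 1)
stretched = stretch(raw, black_point=float(np.median(raw)))
```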

This was the RAW file as it looked in-camera, with no histogram stretching (30 seconds, ISO 1600):

DSC_7883.JPG


Final image after stacking / histogram stretching / cropping:

orion_620x620_square_crop-jpg.jpg
 
Last edited:
  • Like
Likes PhysicoRaj and collinsmark
  • #222
PhysicoRaj said:
I like the dynamic range of this Orion image. What post-processing did you do?

I always thought DR was a characteristic of the sensor; I'm not sure whether the optics or anything else can also affect it. So one question that is still in my head is whether a stack of several frames has better dynamic range than a single sub-exposure. I was thinking that (ideally) by stacking 3 kinds of frames (under-exposed, mid-exposed and over-exposed) I could get an image with better DR than the 3 individual frames.

Increasing the effective dynamic range can be achieved by stacking, without the need for HDR techniques, as long as
  • the sensor gain (i.e., ISO for a DSLR) is somewhere around unity, or greater than unity (where unity gain is 1 ADU per electron), and
  • the stacked image has a sufficiently greater bit depth than the individual sub-frames. (e.g., the RAW images have 12-bit depth, while the stacked images have 16- or 32-bit depth.)

Essentially, stacking relies on the Central Limit Theorem, not only to reduce noise, but also to increase the effective dynamic range (in part by reducing quantization noise).
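Here's a toy simulation of that effect (all numbers invented for illustration): a 0.3 ADU signal is invisible in any single quantized 12-bit frame, but it shows up in the floating-point average of many frames, provided each frame carries enough noise to dither the quantizer and the stack is stored at higher bit depth:

```python
import numpy as np

rng = np.random.default_rng(1)

pedestal_adu = 100.0     # camera bias offset (keeps the quantizer away from 0)
true_signal_adu = 0.3    # fainter than 1 ADU: lost to quantization in any single frame
read_noise_adu = 1.5     # per-frame noise; acts as natural dither for the quantizer
n_frames = 400

# Simulated, quantized 12-bit sub-frame values of a single pixel.
frames = np.round(pedestal_adu + true_signal_adu + read_noise_adu * rng.standard_normal(n_frames))
frames = np.clip(frames, 0, 4095)

single = frames[0] - pedestal_adu       # a whole number of ADU: the 0.3 ADU signal is invisible
stacked = frames.mean() - pedestal_adu  # higher-precision float average of the stack

print(single, stacked)                  # stacked comes out near 0.3 ADU
```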

When choosing the exposure time of your sub-frames, and assuming the above criteria are met, there is a tradeoff to be made between a smaller number of longer-exposure sub-frames and a greater number of shorter-exposure sub-frames.

The biggest con of a greater number of shorter-exposure sub-frames is increased total read noise, all else being equal. The biggest pro is greater dynamic range.*

Thermal noise (a.k.a. dark current, amp glow) and light-pollution noise are virtually unaffected either way by this choice.

*(Edit: a greater number of shorter-exposure sub-frames has additional advantages too, such as being less susceptible to guiding and tracking errors, wind vibration, any other vibration [with the exception of DSLR mirror-flip/shutter-release vibrations], airplane trails, satellite trails, cable snags, etc.)
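To make the tradeoff concrete, here's a back-of-the-envelope comparison (every number is invented for illustration): for a fixed total integration time, the sky and thermal contributions depend only on the total time, while the read-noise contribution grows with the square root of the number of sub-frames:

```python
import numpy as np

def stack_noise_e(total_time_s, sub_time_s, read_noise_e, sky_rate_e_per_s, dark_rate_e_per_s):
    """Std. dev. (electrons) of the summed stack, for a fixed total integration time."""
    n_subs = total_time_s / sub_time_s
    sky_dark_var = (sky_rate_e_per_s + dark_rate_e_per_s) * total_time_s  # total time only
    read_var = read_noise_e**2 * n_subs                                   # grows with frame count
    return np.sqrt(sky_dark_var + read_var)

# Hypothetical setup: 1 hour total, 5 e- read noise, 2 e-/s/px sky, 0.05 e-/s/px dark current.
for sub_s in (30, 120, 300):
    print(sub_s, "s subs ->", round(stack_noise_e(3600, sub_s, 5.0, 2.0, 0.05), 1), "e-")
```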
 
Last edited:
  • Like
Likes PhysicoRaj and Devin-M
  • #223
collinsmark said:
  • the sensor gain (i.e., ISO for a DSLR) is somewhere around unity, or greater than unity (where unity gain is 1 ADU per electron)
So now I have to find out the unity-gain ISO for my camera. The DR vs. ISO curve for my cam is pretty linear (see image below, from DXOMark).

The biggest con of a greater number of shorter-exposure sub-frames is increased total read noise, all else being equal. The biggest pro is greater dynamic range.
We have ways to take out read noise, like dithering and calibration, right? If read noise can be effectively eliminated, the advantages of short-enough subframes become more obvious.

1638249398320.png
 
  • Like
Likes collinsmark
  • #224
PhysicoRaj said:
We have ways to take out read noise, like dithering and calibration, right? If read noise can be effectively eliminated, the advantages of short-enough subframes become more obvious.

Read noise cannot be eliminated, unfortunately. Boy, howdy that would be nice. But no.

There are two aspects of read noise of interest: mean (i.e., "average") and standard deviation. Calibration can help with the mean, but is helpless regarding the standard deviation.

Dark frames are a way to characterize the mean of the read noise + thermal noise. Bias frames concentrate specifically on the mean of the read noise (no thermal noise). Dark-frame subtraction and/or bias-frame subtraction can effectively eliminate the mean of the read noise; neither can be used to eliminate the standard deviation of the noise, however.
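In numpy terms, that calibration step looks something like this (a minimal sketch; the array shapes and values are placeholders for data that would really come from RAW files):

```python
import numpy as np

# Placeholder calibration stacks, each shaped (n_frames, H, W) like the sensor.
bias_frames = np.random.default_rng(0).normal(500, 5, (50, 100, 100))             # offset + read noise
dark_frames = bias_frames + np.random.default_rng(1).poisson(20, (50, 100, 100))  # + thermal signal

master_bias = bias_frames.mean(axis=0)  # estimates the mean offset of the read noise
master_dark = dark_frames.mean(axis=0)  # estimates offset + mean thermal contribution

light = np.random.default_rng(2).normal(900, 30, (100, 100))  # a placeholder raw light frame
calibrated = light - master_dark  # removes the *mean*; the frame-to-frame scatter remains
```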

Dithering is a way to combat any imperfections in your calibration by shifting the signal around a little in the spatial domain (onto nearby, but different, pixels). Again, though, it does nothing in terms of eliminating the residual noise; it just smears the residual noise spatially.

For a given camera gain (ISO setting for DSLRs), each pixel of each sub-frame will receive noise with a roughly constant mean, [itex] \mu_r [/itex], and a constant standard deviation, [itex] \sigma_r [/itex]. As mentioned earlier, the mean part of the read noise, [itex] \mu_r [/itex], can be effectively eliminated via calibration, but the part of the noise represented by the standard deviation just accumulates with more and more sub-frames: it grows in proportion to the square root of the number of sub-frames. (It's true that when averaging sub-frames [i.e., stacking], the effective read noise falls to [itex] \frac{\sigma_r \sqrt{N}}{N} = \frac{\sigma_r}{\sqrt{N}} [/itex], implying the more frames the better. But don't be misled into reducing the exposure time of sub-frames for the sole reason of having more sub-frames. Remember, the signal strength of each sub-frame is proportional to its exposure time, so by reducing the sub-frame exposure time you are also reducing the signal strength, and thus the signal-to-noise ratio, of each sub-frame.)

Recall that the standard deviations of other forms of noise, such as thermal noise or light pollution, are independent of the number of sub-frames. These types of noise accumulate with the square root of time, whether you take many short exposures or one big, long exposure. You can combat the mean of the thermal noise with dark-frame subtraction, but you can't do squat about its standard deviation (for a given exposure time and temperature). And you can't do anything at all about the light pollution (except by reducing it from the get-go, by traveling to a darker location or, to some degree, by using filters).

So when determining the "optimal" exposure time for your sub-frames, the goal is to increase the exposure time until the standard deviation of thermal noise + light pollution (technically this is [itex] \sqrt{\sigma_{th}^2 + \sigma_{lp}^2} [/itex]) is roughly greater than or equal to the standard deviation of the read noise, for a single sub-frame. In other words, increase the sub-frame exposure time such that read noise is not the dominant noise source.
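That criterion is easy to turn into a rough sub-exposure calculator (all numbers here are stand-ins; you would measure your own camera's read noise and your own sky's electron rate):

```python
def min_sub_exposure_s(read_noise_e: float, sky_rate_e_per_s: float,
                       dark_rate_e_per_s: float, swamp_factor: float = 1.0) -> float:
    """Exposure time at which sky + thermal noise matches the read noise.

    Read noise is fixed per frame; the sky/thermal standard deviation grows as
    sqrt((sky_rate + dark_rate) * t). Setting sqrt(rate * t) = swamp_factor *
    read_noise and solving for t gives the minimum sub-frame length.
    """
    rate = sky_rate_e_per_s + dark_rate_e_per_s
    return (swamp_factor * read_noise_e) ** 2 / rate

# Hypothetical DSLR under a bright-ish sky: 5 e- read noise, 2 e-/s/px sky, 0.05 e-/s/px dark.
print(round(min_sub_exposure_s(5.0, 2.0, 0.05), 1), "s")  # ~12 s just to match the read noise
```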

Btw, all this applies to deep-sky astrophotography targets. Planetary targets shot with lucky-imaging techniques are a different beast. In planetary astrophotography you'll gladly sacrifice some signal strength per sub-frame to gain many clear snapshots of the target, warped and noisy as they may be, in order to combat atmospheric seeing. There, it's understood that read noise is the dominant noise source, by far.

[Edit: added boldface to the line summarizing my main point.]
 
Last edited:
  • Like
Likes PhysicoRaj and Devin-M
  • #225
collinsmark said:
There are two aspects of read noise of interest: mean (i.e., "average") and standard deviation. Calibration can help with the mean, but is helpless regarding the standard deviation...

...So when determining the "optimal" exposure time for your sub-frames, the goal is to increase the exposure time until the standard deviation of thermal noise + light pollution (technically this is [itex] \sqrt{\sigma_{th}^2 + \sigma_{lp}^2} [/itex]) is roughly greater than or equal to the standard deviation of the read noise, for a single sub-frame. In other words, increase the sub-frame exposure time such that read noise is not the dominant noise source.
Thanks for the detailed crash course on noise. Now, applying this to HDR stacking, the minimum exposure length of the 'under-exposed' set of frames should be such that the read noise is swamped by thermal and other noise sources. Along with this, I'd choose an ISO that is above unity gain and still under-exposes the image. Correct me if I am wrong.
 
  • #226
sophiecentaur said:
1633636039108-png.png
I found this. An expensive mount, if I'm not mistaken.
See here.

Devin-M said:
6dbdab18-9c8f-4ff3-8e8a-3911a002b6c6-jpeg.jpg

“…the goldman array…”

A new triple-aperture space telescope has been launched…

The Imaging X-ray Polarimetry Explorer…

https://en.m.wikipedia.org/wiki/IXPE

Imaging_X-ray_Polarimetry_Explorer.jpg
1280px-IXPE-space-telescope-drawing.png
1280px-IXPE-artist-rendition.jpg
 
  • Like
Likes PhysicoRaj and collinsmark
  • #227
Devin-M said:
A new triple-aperture space telescope has been launched…
In space you have very little vibration or creep, so a spindly mount (or multiple mounts) will not give collimation errors; down here, that's why people use such chunky mounts. I think you will find that the relative motions of the cameras, as the three tripods react differently to a moving load (and to the wobble every time a truck goes by your house), will eventually become obvious to you, and you will find you need a more conventional approach with guiding.
But hell - you will have fun along the way and stories to tell.
 
  • Like
Likes collinsmark
