Stargazing: Level of detail in prime focus vs. eyepiece images

AI Thread Summary
The discussion centers on the differences in detail and clarity when viewing the Sun through an eyepiece versus capturing images at prime focus with a DSLR. Users noted that while the eyepiece provided a larger and sharper image, the prime focus images lacked detail, likely due to factors like JPEG compression and focusing challenges. The importance of stacking multiple frames to enhance image quality was emphasized, as well as the need for proper exposure settings to avoid losing detail. Additionally, the weight of the DSLR may complicate focusing, leading to potential image blur. Overall, achieving better results requires addressing focus, exposure, and image processing techniques.
  • #151
Devin-M said:
I think it’s as simple as cropping out the empty space around the planet before uploading and what remains will be displayed at a larger apparent size.
Exactly. For the focal length I used (2000mm), the smaller sensor (1.6x crop), and the crop I did (1920p to 1000p), I expected a larger apparent size for the planet, but I am seeing less. Your image is at a lower focal length (600mm) on a bigger sensor (1.6x the size of mine), but it still shows the planet at a larger apparent size than mine. How much did you crop the image before uploading here?
 
  • #152
I cropped mine a lot.

I just cropped and enlarged your image and got this, which looks very similar to mine.

807AE10C-2479-4275-ADBB-312AE230C6CF.jpeg


mine for comparison:

https://www.speakev.com/attachments/saturn_stacked_mono_green2-gif.150147/
 
  • #153
With mine, I shot in RAW mode at 7360x4912, not in video mode at 1920x1280… you probably lost quite a bit of detail by doing that.
 
  • #154
Well yes, makes sense then.

I think the loss of detail is because my camera records only MP4, even at its maximum video resolution of 4K. I don't have an option for uncompressed AVI or SER, or even lightly compressed MKV/MOV.

The 4K recording is 25 fps, which seems very low, so I went to 50 fps, which gives 1080p HD.

Provided the planet occupies a fixed number of sensor pixels at a given focal length, a smaller video size is effectively just cropping it down, but MP4 compression is insanely lossy.
 
  • #155
PhysicoRaj said:
I think the loss of detail is because my camera records only MP4, even at its maximum video resolution of 4K. I don't have an option for uncompressed AVI or SER, or even lightly compressed MKV/MOV.
There you go! Camera characteristics are things you just have to buy your way out of. You could take a change of direction for a while and image different objects - objects more suited to your camera lens and sensor. There is no shortage of them and you can get some very satisfying stuff - particularly because you can be looking up, rather than near the boiling horizon for planets.
One day you can spend a load of money on an appropriate OTA, mount, and camera, but you will never get the sort of planetary images that you crave with what you have. That's just being pragmatic.

PS I was wondering whether a large set of still images might give you enough for dealing with 'planetary problems' and give you inherent high res.
 
  • #156
I wouldn’t be so sure that when you switched to 1080p the image was “cropped.” “Resized” is more likely, which, if true, threw away a very significant amount of the planet’s resolution.

My D800, which has 7360x4912 resolution, also shoots video in 1080p, but it doesn’t “crop” the full frame, it “resizes” it. So in my camera’s case, if I were shooting in 1080p, I’d start with 4912 pixels in the vertical axis but end up with only 1080; in other words, the vertical resolution would be only 1/4.5 of the maximum possible resolution.
 
  • #157
Devin-M said:
So in my camera’s case If I was shooting in 1080p, I’d be starting with 4912 pixels in the vertical axis and after resizing I’d be down to 1080 pixels in the vertical axis
That is what I (and most other digital photographers?) would call cropping, which loses information. Re-sizing is just altering the size of a displayed image. Re-sizing can involve cropping when you are displaying an image with a modified aspect ratio without distorting.
 
  • #158
Cropping would be when you remove pixels only from the edges of the image (as if the 1080p came only from the central pixels of the sensor, all of which are preserved except the chopped-off edges), which won’t change the resolution of the planet. Resizing is when you throw away pixels in between other pixels, which does change the resolution of the planet. At least on my camera, if I shot in 1080p, my Saturn resolution would be only 1/4.5 as high as shooting in RAW mode.
 
  • #159
Devin-M said:
Cropping is when you remove pixels only from the edges of the image (like if the 1080p came only from the central pixels, all of which being preserved) which won’t change the resolution of the planet, resizing is when you throw away pixels in between other pixels which does change the resolution of the planet. At least on my camera, If I shot in 1080p, my Saturn resolution would only be 1/4.5 as high as shooting in RAW mode.
I would say you are using the terms in an uncommon way. Cropping gets rid of information (no question of that, because there will be pieces of the photograph that end up on the floor). In the case of a picture of a planet, you are sort of lucky that the background stars may not be what you wanted (but what about the Jovian moons?). Any loss of information may have consequences.

Describing re-sizing as 'throwing away pixels' is not accurate, or at least a really bad way of putting it. If you want to alter the (spatial) sampling rate (increasing or decreasing), an algorithm will make best use of the resulting samples by interpolation and will lose no information. If you have a small image of a planet, with poor resolution, and you want a bigger one, you will not be throwing anything away but going for the best interpolation formula. Repeating samples would be a really naff thing to do. The two descriptions you used are only even possible for changes with integer ratios of pixel spacing.
 
  • #160
I should be clearer… resizing to a smaller size throws away information, which is what I suspect was done here. Resizing to a larger size doesn’t necessarily lose any information, especially when interpolation is disabled. Resizing to a larger size with interpolation reduces sharpness.
 
  • #162
One way you can test whether you’re losing resolution in 1080p mode…

Take a short clip in 4K mode, and then another clip in 1080p mode… if they both have the same framing, you know you lost resolution, because the image wasn’t cropped, it was resized to smaller dimensions, which negatively affects the resolution of the planet.

In other words, if the framing stays the same when going from 4K to 1080p (i.e. all the same objects are still in frame in the same positions), it means you threw away pixels in between other pixels (resizing smaller) rather than only throwing away pixels from the edges of the sensor (cropping).
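Here's a quick back-of-the-envelope sketch of why this test works (Python; the planet diameter is a made-up illustrative number, the sensor width is my D800's):

```python
# Hedged sketch: predicted planet size in pixels under the two hypotheses.
sensor_w = 7360        # D800 full-frame width in pixels
video_w = 1920         # 1080p video width
planet_px_raw = 90     # hypothetical planet diameter in RAW pixels

planet_px_crop = planet_px_raw                         # centre crop: 1:1 kept
planet_px_resize = planet_px_raw * video_w / sensor_w  # full-frame resize

print(f"crop:   {planet_px_crop} px")
print(f"resize: {planet_px_resize:.1f} px")  # ~23.5 px: most detail gone
```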
 
Last edited:
  • #163
Here I've substituted a hummingbird for a planet to demonstrate the different display options.

The 1st thing to consider is that this site will resize any image you upload to no more than 620px height or 800px width. Knowing this, how do you shoot and process the image to get the highest angular resolution on the target in the final display environment?

Here are several examples:

1) Full frame image (7360x4912 jpg), uploaded and reduced by the server to 800 width:
620p.jpg


2) Shot in simulated 1080p HD 16x9 ratio, uploaded and reduced by the server to 800 width:
1080p_to_620p_16x9.jpg


3) Full frame image (7360x4912 jpg), cropped to 620 height, 3x2 ratio prior to uploading, reduced by server to 800 width:
4912p_cropped_to_620p_3x2.jpg


4) Shot in simulated 1080p HD 16x9 ratio, cropped to 620 height, 3x2 ratio prior to uploading, reduced by server to 800 width:
1080p_cropped_to_620p_3x2.jpg


We can see from the above demo that for the highest angular resolution in the final display, option 3 is best: "Full frame image (7360x4912 jpg), cropped to 620 height, 3x2 ratio prior to uploading."

That would be the equivalent of shooting in RAW mode, then cropping (not resizing) the image to 620 height (or 800 width), and then uploading to the server.
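To put rough numbers on the four options (a hedged sketch; the 90 px target diameter is a made-up example, the rest follows the dimensions above):

```python
# How many display pixels the target keeps under each upload option.
target_px = 90                            # target diameter on sensor (example)
full_w, hd_w, site_w = 7360, 1920, 800    # RAW width, 1080p width, site max

opt1 = target_px * site_w / full_w                    # full frame, server resize
opt2 = target_px * (hd_w / full_w) * (site_w / hd_w)  # 1080p, then server resize
opt3 = target_px * 1.0                                # RAW + 100% crop: untouched
opt4 = target_px * hd_w / full_w                      # 1080p capture, then crop

for i, v in enumerate((opt1, opt2, opt3, opt4), 1):
    print(f"option {i}: {v:5.1f} display px on target")
# option 3 keeps all 90 px; the others drop to ~10-24 px
```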
 
  • #164
Devin-M said:
Here’s a good article on resizing vs cropping…

https://www.photoreview.com.au/tips/editing/resizing-and-cropping/
I wouldn't describe that article as good. It says that resizing means 'throwing away pixels'. As I mentioned before, any photographic processing software worth its salt never just throws away pixels; the individual pixel values are samples of the original scene. Resizing an image requires appropriate filtering in order to minimise any loss of information and avoid distorting the spatial phase or frequency of the components of the original image. The 'appropriate filtering' basically starts by reconstructing the original image (akin to the low-pass audio filter which removes the sampling products from an audio ADC). This image can be reconstructed perfectly if the original sampling followed the rules (Nyquist), and it can then be resampled downwards by applying a further Nyquist filter. Nothing in the spectrum below the new Nyquist frequency need be lost, and you will get a set of new pixels (samples) that should not show any pixellation once displayed with the appropriate post-filtering.
Note: the process of re-sampling by just leaving out or repeating samples was last used in the old days of movie film, when a sequence shot at a non-standard rate needed to be projected at the standard rate; frames were crudely repeated or deleted. All the information in a resampled image can be reproduced perfectly, except when reducing the sample rate (number of pixels), because the Nyquist criterion has to be satisfied by suitable pre-filtering of the first stored image. Actually, strict Nyquist filtering is not always necessary for some images, because aliases are not necessarily an impairment; aliases in normal photographs can be much more of a problem because of regular patterns, which we don't see in astrophotography, and intelligent image processors can deal with a lot of that.
Calling all this 're-sizing' is misguided and oversimplistic. This process of 'zooming' is re-sampling, and using the right term makes it clear what's going on.
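To make the distinction concrete, here is a minimal sketch (assuming the Pillow library; the filenames are hypothetical) contrasting naive decimation with properly filtered resampling:

```python
from PIL import Image  # assumes Pillow is installed

img = Image.open("saturn_frame.tif")       # hypothetical source frame
small = (img.width // 4, img.height // 4)

# Naive decimation: NEAREST effectively keeps every 4th sample, so anything
# above the new Nyquist limit aliases (the "throwing away pixels" case).
naive = img.resize(small, Image.NEAREST)

# Filtered resampling: LANCZOS is a windowed-sinc filter that acts as an
# approximate Nyquist prefilter, preserving detail below the new limit.
filtered = img.resize(small, Image.LANCZOS)

naive.save("saturn_decimated.png")
filtered.save("saturn_filtered.png")
```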
 
  • #165
Now if I enlarge and crop option 4...
Devin-M said:
4) Shot in simulated 1080p HD 16x9 ratio, cropped to 620 height, 3x2 ratio prior to uploading, reduced by server to 800 width
1080p_cropped_to_620p_3x2_enlarged.jpg


To the same apparent size as option 3...
Devin-M said:
3) Full frame image (7360x4912 jpg), cropped to 620 height, 3x2 ratio prior to uploading, reduced by server to 800 width
4912p_cropped_to_620p_3x2.jpg


...the loss of image quality in option 4 (1st picture) can be easily observed, which I believe is the same loss the OP experienced by shooting in 1080p HD mode rather than RAW...
 
  • #166
Devin-M said:
Now if I enlarge and crop option 4...
What I see is the same viewed image size at different resolutions (pixel sizes), subjected to some form of processing which has a name but no definition.

If you fire up Photoshop or an equivalent and load an image, go to the 'crop' tool and it will allow you to select a portion of the full image. Unless it thinks it knows best what you want, you will be left with the portion you chose, and it will have the same pixel size. That is why I call cropping cropping. If you choose to expand to fill the screen, the image will (should) have the same pixel dimensions. You can change the pixels per inch in the Image Size option. In PS you can 're-size' the image to fit whatever printed size you want, and you also have a choice of pixels per inch. The two quantities, size and resolution, are independent.
As far as I'm concerned, Adobe is God in these matters and their notation is pretty universal. Their image sizing can be done with various algorithms, IIRC.
 
  • #167
To "crop" but not also "resize" in Photoshop, you have to choose the ratio of the crop but not the final pixel dimensions. So if you know the final display will be 800px wide, and your crop in ratio mode ends up 800px wide, then you'll get a 1:1 ratio of sensor pixels to display pixels in the final image, which should result in the lowest possible degradation in quality if you're imaging an object of small angular size like Saturn.

ratio-crop.jpg
 
  • #168
Devin-M said:
Resizing to a larger size doesn’t necessarily lose any information especially when interpolation is disabled.
I read this again and, in the context of PS etc., it doesn't really mean anything unless you specify whether or not the pixel count of the image is increased so as to keep the displayed pixel size the same. I can't think how you could achieve an arbitrary amount of resizing without some form of interpolation filtering. The positions of the original samples were defined by the source image array. How could you 'resize' the image just by adding or subtracting a pixel every so often?
 
  • #169
So if you know the final display width is 800px, then while in ratio crop mode (in this case 3:2) you select an area which is 800px wide, and you'll be cropping without resizing.

799px.jpg
 
  • #170
Devin-M said:
So then if you know the final display will be 800px width, and your crop in ratio mode ends up at 800px width, then you'll get a 1 to 1 ratio of sensor pixels to display pixels in the final image which should result in the lowest possible degradation in quality if you're imaging a low angular dimension object like Saturn.
I think you are underestimating the capabilities of processing apps these days. I now see what you were getting at: you are implying that you have to choose your scaling so the pixels have an integer ratio. If it were as simple a system as you imply, then how would a photographer be able to mix images of arbitrary original sizes and pixel resolutions and scale/distort them so that the result doesn't show the jiggery-pokery? The processing has to go far deeper than that, dealing with reconstructed internal images, before there would be any chance of PS doing the excellent editing job it does.
There is no harm in doing PS's thinking for it, but why? You would have a serious problem stitching images of a large object like the Moon together, for instance.

BTW, is that your shot of the hummingbird? Nice, and you are a lucky devil to have them around.
 
  • #171
sophiecentaur said:
I can't think how you would be able to achieve any arbitrary value of resizing without some form of interpolation filtering. The positions of the original samples were defined by the source image array. How could you 'resize' the image just by adding or subtracting a pixel, every so often?
Non-Interpolated Resize (Enlarge) "Nearest Neighbor":
non-interpolated.jpg


Interpolated Resize (Enlarge):

interpolated.jpg
 
  • #172
sophiecentaur said:
BTW is that your shot of the humming bird? Nice and you are a lucky devil to have them around.
Thank you, yes. I took these in my backyard, in RAW, with a Nikon D800 and a Nikon 300mm f/4.5 with a Nikon TC-301 2x teleconverter (for an effective 600mm f/9), on a cloudy day, at about 1/500th sec and ISO 6400.
 
  • Like
Likes sophiecentaur
  • #173
It looks like tonight is my only chance this month before the moon comes back…

5FD89ECC-ED57-430A-BEE8-490B722C6999.jpeg
 
  • #174
Devin-M said:
So if you know the final display width is 800px, then while in ratio crop mode (in this case 3:2)
That's interesting. So it's the PS adventure game! I can't find that particular door. From the image you posted of a PS screen, that box should drop down from the Edit button? My Resize button gives me the usual size and resolution options. Where does the other list come from? Is it a plug-in?

I still don't think that sort of special under-sampling will deal with many of the functions that we use PS for - even a simple trapezium stretch will change the frequencies and the effective pixel spacing, so there's no longer a simple ratio.

People seem to be trying to use inadequate equipment, IMO. RAW images, TIFF and AVI are worth paying for, for 'show pictures'. There are many low-cost CMOS and CCD cameras which can be driven by a bog-standard laptop (not always conveniently by macOS, though! grr). The sensor on an amateur DSLR is too big or not HD enough unless you use a very expensive scope.
 
  • #175
The "ratio" refers to the ratio of the number of pixels in the width to the number in the height, so whatever you crop in ratio mode, the remaining pixels should be unchanged (not resized) by the cropping operation (if you use a 3:2 ratio, the height will be 2/3 of the width, which is standard DSLR/mirrorless framing). The reason you would choose 800 pixels for the width while cropping in ratio mode is that if you choose a larger number, the image will be downsized to this website's 800px max width. If you choose an area smaller than 800px in ratio mode, that also won't change the original pixels, but the image won't fill the maximum space available on this website. So if you crop to 800px width or less in ratio mode (and the height is less than 620px), you will end up with each pixel shown on this site corresponding to a single pixel from the image sensor.
 
  • #176
Devin-M said:
the max available space on this website
I don't understand how a particular website and the way it displays images is of any consequence to real photography. If you want to show people your images in all their glory, then you send them your own large files. The vagaries of a website just can't be trusted, so why bother with it, if quality is important?

There is a phrase "Procrustean bed" which applies here, I think.

Plus, I would love to know how to access that drop-down of resolution choices.
 
  • #177
It's necessary to know the final display resolution so the actual pixels don't get resized to a lower resolution when you upload, if you want the highest possible angular resolution from the sensor to your eyeballs (this is called a 100% crop). You would lose resolution if you chose a 1000px-wide area and tried to upload it for an 800px-wide final display.

After selecting the crop tool from the menu on the left, select a ratio from the drop-down menu circled in red.

You'll find that the pixels won't be resized by this operation. Since I've selected an area that's the same width as the final display, each pixel on the sensor will display on one pixel of the final display on this web page, which theoretically gives the best resolving power from sensor to eyeball.

ratio-crop.jpg


800px2.jpg
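For anyone without Photoshop, the same 100% crop can be scripted; a hedged sketch with Pillow (filenames hypothetical):

```python
from PIL import Image  # assumes Pillow is installed

img = Image.open("saturn_full_frame.tif")  # hypothetical RAW-derived file

# Centre an 800x620 box on the target; crop() copies pixels unchanged,
# so each sensor pixel lands on exactly one display pixel at 800px width.
cx, cy = img.width // 2, img.height // 2
box = (cx - 400, cy - 310, cx + 400, cy + 310)
img.crop(box).save("saturn_100pc_crop.png")
```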
 
  • #178
By 4am I’d been wiping dew off 3 lenses every 20 minutes for 8 hours… I think I ended up with 3 targets and over 20 hours of combined observation (across the 3 rigs) over the course of 9 hours… I depleted 2 full sets of batteries last night, so I’m still shooting dark frames in the refrigerator this morning… Still have loads of processing to do until I have something to show for it…

8D9FE5DA-38B7-4BED-8A6A-D94B092A6B22.jpeg

C1427BB4-8767-4E68-ADD0-E35F7E517923.jpeg

DE79D137-249F-4012-A9AF-3DD1F638320E.jpeg

38ED4BE3-7CB2-470F-9A56-0636B9D9E638.jpeg
 
Last edited:
  • Like
Likes Drakkith
  • #179
Flying Bat Nebula - Ha-RGB Composite - 300mm f/4.5 on 35mm sensor
12x20min (4 hrs) 6nm Ha Filter @ 6400iso
60x1min (1hr) RGB (no filter) @ 3200iso

flying-bat-ha-rgb2.jpg


100% Crop

flying-bat-ha-rgb-100pc-crop-1-jpg.jpg


Orion + Assorted Nebulas Ha Filter - 24mm f/2.8 on a 35mm sensor
12x5min (1hr) @ 6400iso

orion_ha-jpg.jpg


100% crop

orion_ha-crop-1-jpg.jpg
 
Last edited:
  • Like
Likes Drakkith
  • #180
4182784.png

4182784-1.png

5883434-1.jpeg

5883434.jpeg

https://www.speakev.com/cdn-cgi/image/format=auto,onerror=redirect,width=1920,height=1920,fit=scale-down/https://www.speakev.com/attachments/flying-bat-ha-rgb_1600-jpg.152139/
 
Last edited:
  • Like
Likes Drakkith
  • #181
Devin-M said:
You'll find that the pixels won't be resized by this operation.
That's true, because it's what the crop operation does. The problem is that areas of images often need to be scaled at different rates. Dealing with and introducing spatial distortions must always involve non-integer sample rate changes. Although the simple technique of linear scaling can be used for many astro images, if you want to stitch images together you can only do it with long-focus lenses, because of barrel and pincushion distortion in the overlap.
However, I wonder how relevant the image-quality deterioration can be when dealing with real images of any astro object. Stacking will add jitter and reduce such problems, which, to be honest, should only be noticeable when the interpolation filtering doesn't use enough nearby pixels.

I do acknowledge, however, that if you are aiming to get the best from a website which uses crude up- or down-scaling, then you need to provide it with images of the correct pixel counts and image dimensions.

I can't find much about the process of image sampling on Google; there's the usual hole in the information, with descriptions of how to use Photoshop etc. but not of what it actually does (that's worth a lot of money to them), and, at the other extreme, applications of pixel processing for specific purposes such as facial recognition. I have no access to appropriate textbooks, and searching for information at that level is a real pain. But sampling theory (multidimensional) does tell us that, subject to noise, the original (appropriately filtered) image can be reconstructed perfectly with the right filtering. You only need to look at the best up-scaled TV pictures to see how good things can be.
 
  • #182
Here's an easy way to measure what your sensor can theoretically do... In my case I think I'm more limited by the lens than the sensor.

I took an RGB source image from last month, shot at 600mm f/9.

100pc-crop.jpg


Next I did a 100% ratio crop of an 800w x 620h patch from the RAW file, so the pixels have not been resized and the pixel dimensions match the maximum allowed on this web page; I should be getting exactly 1 pixel from the sensor for every pixel displayed here...

600mm-f9-100pc-800w-d800.jpg


Next I uploaded the image to http://nova.astrometry.net/upload ...

4184584.png


4184584-1.png


4184584-2.png


5885661-1.jpeg


5885661.jpeg


Now I have the measurement of the sensor's arcsec/pixel capability with the 600mm f/9 lens fitted...

4184584-3.jpg


Center (RA, Dec):(314.139, 44.664)
Center (RA, hms):20h 56m 33.411s
Center (Dec, dms):+44° 39' 51.833"
Size:22.4 x 17.3 arcmin
Radius:0.236 deg
Pixel scale:1.68 arcsec/pixel

The lower the arcsec/pixel figure, the finer the detail one can resolve, assuming a perfect lens...
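As a cross-check, the plate-solved figure agrees with the standard pixel-scale formula (a hedged sketch; the D800 pixel pitch below is derived from its published 35.9mm sensor width, which isn't stated in this thread):

```python
# Pixel scale ["/px] = 206.265 * pixel pitch [um] / focal length [mm]
pixel_pitch_um = 35.9e3 / 7360   # Nikon D800: ~4.88 um (assumed spec)
focal_mm = 600                   # 300mm lens + 2x teleconverter

scale = 206.265 * pixel_pitch_um / focal_mm
print(f"{scale:.2f} arcsec/pixel")   # ~1.68, matching astrometry.net
```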
 
Last edited:
  • #183
I need a better lens before I need a better sensor...
 
  • #184
Devin-M said:
Now I have the measurement of the sensor's arcsec/pixel capability...
If you use a different lens, the arcsec/pixel ratio will change. I think that's why it's normal to specify pixel size and the number of pixels along each axis; those quantities are more 'portable'.
Devin-M said:
I need a better lens before I need a better sensor...
With your system, that's 3x the cost. Ouch!
 
  • #185
sophiecentaur said:
With your system, that's 3x the cost. Ouch!
My optical tube assemblies are only worth about $316 USD each (used: Nikon 300mm f/4.5, $161, plus Nikon TC-301 2x teleconverter, $155).

I think the OP could estimate whether it's better to shoot through the eyepiece or at prime focus by measuring the arcsec/pixel of both options and then also counting how many pixels wide the stars are in each.
 
  • #186
I did a ratio crop of a single dim star at 600mm f/9, so the pixels from the sensor weren't resized, and then I did an interpolation-free enlargement to 620px height using "nearest neighbor" as the resampling algorithm...

single-star-3.jpg


Then I counted how many pixels wide the dim star was (14 pixels)...

single-star-600mm-f:9-14px-star.jpg
So now I know something about how good the sensor is at a given focal length (in this case 1.68 arcsec/pixel at 600mm), and I know something about how good the lens is: stars which should only cover a single pixel have a radius of 7 pixels, so I think the lens would need to be around 7x sharper before I could get more detail from a denser sensor...
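Counting the star's width can also be scripted instead of done by eye; a rough sketch (hypothetical filename, crude half-maximum threshold, no background subtraction):

```python
import numpy as np
from PIL import Image  # assumes Pillow and NumPy

star = np.asarray(Image.open("single_star_crop.png").convert("L"), float)

# Take the row through the brightest pixel and count samples above half
# the peak: a crude full-width-at-half-maximum estimate in pixels.
peak_row = np.unravel_index(star.argmax(), star.shape)[0]
row = star[peak_row]
width_px = int((row > row.max() / 2).sum())
print(f"star width ~ {width_px} px")
```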
 
  • #187
I think if you multiply the dim star's pixel radius by the arcsec/pixel and then test 2 different options, whichever comes out with the lower number suggests you'll be getting more detail with that option...

In the test above I got: 7px star radius x 1.68 arcsec/pixel (sensor/lens combo) = ~11.7 arcsec effective resolution, accounting for the flaws in the optical tube.
 
Last edited:
  • #188
For comparison, on my most recent shoot of the Flying Bat Nebula, I used the same 300mm f/4.5 lens, but I didn't use the TC-301 2x teleconverter...

100% ratio crop, 800w x 620h
flying-bat-ha-rgb-800x620-crop.jpg


4192067.png


4192067-1.png


4192067-2.png


5894566-1.jpeg


5894566.jpeg


Which gives:

Center (RA, Dec): (317.713, 60.755)
Center (RA, hms): 21h 10m 51.124s
Center (Dec, dms): +60° 45' 18.476"
Size: 44.6 x 34.5 arcmin
Radius: 0.470 deg
Pixel scale: 3.34 arcsec/pixel

So I know I'm getting 3.34 arcsec/pixel projected onto the sensor...

Now I enlarge a dim star to 620px height with no interpolation ("nearest neighbor"):

flying-bat-ha-rgb-single-star.jpg


The star radius is about 3 pixels...

If I multiply the 3px star radius by 3.34 arcsec/pixel, I get around 10.02 arcsec effective resolution. That's slightly better than the ~11.7 arcsec I got from the same test with the 2x teleconverter added, which makes me think it probably isn't worth using the 2x teleconverter to try to get better resolving power on a target.
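The comparison in these last two posts boils down to one multiplication; a hedged sketch using the numbers measured above (tiny differences from the quoted ~11.7 are just rounding):

```python
# Effective resolution ~ star radius [px] * plate scale [arcsec/px].
def effective_resolution(star_radius_px, arcsec_per_px):
    return star_radius_px * arcsec_per_px

with_tc = effective_resolution(7, 1.68)     # 600mm f/9 (with 2x TC)
without_tc = effective_resolution(3, 3.34)  # 300mm f/4.5 (no TC)
print(f'with TC: {with_tc:.1f}"  without TC: {without_tc:.1f}"')
# ~11.8" vs ~10.0": the 2x teleconverter doesn't add real resolving power
```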
 
  • #189
I just ordered a 6" 1800mm focal length f/12 Meade LX85 Maksutov-Cassegrain OTA for $779 USD... we'll have to see what it does when it shows up in the mail...

meade-lx85-m6-ota-1_copy.jpg
 
  • Like
Likes collinsmark
  • #190
I received the 1800mm f/12 today… I made a little video from a few test frames showing the atmospheric wobble from shooting a mountaintop (Mt. Lassen in Northern California), which is about 44 miles away…

The telescope feels heavier than I was expecting, though I haven’t weighed it yet. I was able to get it perfectly balanced in both the RA and Dec axes on my cheap tracker, but I had to do some “creative” rigging to achieve this, and I estimate I’m 2-3x over the tracker's weight limit, though I expect it will work. It’s a good thing I have more than one of these trackers, as I had to use an extra counterweight from another one, in addition to using a camera with a 600mm f/9 lens and a fluid video pan head as additional counterweight (I’m not intending to image through the counterweight camera; it’s just there for balance).

There’s a bit of vignetting and dust but I will be correcting these with flat frames when I do astrophotography. Here’s a few pictures including an un-retouched full frame, a 100% crop and some pictures of the setup.

I mounted the camera at prime focus with a T-ring for Nikon DSLRs and a T-adapter. The camera is a Nikon D800 with a 35mm full-frame 36MP sensor, and the telescope is a Meade LX85 1800mm f/12 (6-inch aperture) Maksutov-Cassegrain.

F822C5F5-0E38-4C2C-9098-B7669AD4E9C8.jpeg

7BA17375-094C-4FAC-B5F2-EB04CDA9CD78.jpeg

23980EA5-9BC7-4CB4-A37F-8290D19FCB5F.jpeg

125B53D5-419E-41CC-8E69-BF4BDFB97C45.jpeg

0F3695C5-1666-4F08-A10F-E6AED56070FC.jpeg

87FCF9C9-3CD0-4D28-AB19-C14429FF7043.jpeg
A874F118-9A74-4AF4-B4FB-4FC7445CC025.jpeg

0178C2CB-6A87-4E9D-BE6A-CFCD92833E1B.jpeg

234F68CC-9E94-4181-931D-3854582CB502.jpeg
 
Last edited:
  • Like
Likes collinsmark
  • #191
sophiecentaur said:
There you go! Camera characteristics are things you just have to buy your way out of...

PS I was wondering whether a large set of still images might give you enough for dealing with 'planetary problems' and give you inherent high res.
No doubt my camera is very beginner-level and I will soon grow out of it. As you said, my camera is more suited to nebula imaging. But the clouds here are pierced only by planets. Gotta wait another month for winter skies.

I tried stacking raw stills. It is a huge effort to get the same number of frames I get from a 50 fps video, so my probability of catching "lucky" frames is always lower. It looks like fps wins over resolution in planetary imaging.

Devin-M said:
One way you can test whether you’re losing resolution in 1080p mode…
When I shift from 4K to 1080p FHD mode, I can see the framing change. The subject is more zoomed in, which indicates that the camera is cropping down to the central part of the sensor.

So the planet resolution is the same (pixels per arcsec), but I'm losing detail to the MP4 compression.

One thing I can improve without spending money is to spend some time waiting for better skies. Right now it's moisture-laden and quite hazy.
 
  • Like
Likes Devin-M
  • #192
I started to worry about the overall sharpness after yesterday’s test of the 1800mm f/12, but after some new tests I’m not so worried. I think most of the loss of sharpness yesterday was on account of the 44 miles of dense atmosphere I was shooting through toward the mountaintop.

I have 4 test images. The 1st is a 100% crop of an 800px x 620px section of a test image of the top of a tree down the street from my home (about 618 ft away, shot at ISO 6400, 1/2000th sec), so it shows the actual pixels captured by the sensor 1:1 on this webpage with no resizing. The 2nd shows the full frame, although I had put the camera into 1.5x crop-sensor mode to crop out the vignetting, so even though it’s a full-frame 35mm sensor, we are only seeing the APS-C-sized central portion (4800px x 3200px before upload). The 3rd image is taken through the “telephoto” lens of my iPhone 11 Pro (100% crop), and the 4th is the iPhone telephoto full frame…

7A35ADE4-40AB-4B08-AEB8-D6FC6C5BB336.jpeg
DAE78B37-1E34-4F65-981C-7BE52DE4EE14.jpeg

IMG_8023.jpg

B6490854-B24E-4917-9774-416E2DE76294.jpeg


…looks pretty sharp to me!
 
Last edited:
  • Like
Likes collinsmark
  • #193
Do you see any difference in sharpness between the captured image and the live view through the eyepiece?

I'm also now realising how important the focus mechanism is for getting really sharp images.
 
  • #194
saturn_stacked_3.gif


1800mm f/12 6400iso, 50 raw x 1/320th sec, nikon d800 @ prime focus, meade lx85 maksutov-cassegrain, 100% crop, 448px x 295 px, 0.56 arcsec/pixel
 
Last edited:
  • #195
PhysicoRaj said:
I'm also now realising how important the focus mechanism is for getting really sharp images.
This is well known to astrophotographers; they use either a Bahtinov mask or, when a PC is controlling things, autofocus via the software.
It's largely why people are prepared to spend so much on a good focuser that has a steady action and doesn't creep during a session.
 
  • #196
sophiecentaur said:
This is well known to astrophotographers; they use either a Bahtinov mask or, when a PC is controlling things, autofocus via the software.
It's largely why people are prepared to spend so much on a good focuser that has a steady action and doesn't creep during a session.
I 3D-printed a Bahtinov mask and it works like a charm. The only issue is the telescope's focusing mechanism itself, which I feel needs to be finer. I have seen some people add a DIY mod like a bigger wheel to get precise focus; I'll have to try that and see.

Apart from that, since my scope is an achromat and not an apochromat, could I be seeing one colour plane out of focus, which could be reducing the overall sharpness?
 
  • #197
I think my sharpness on Saturn was being limited a bit by atmospheric dispersion, based on the blue fringing at the top and red fringing at the bottom… Saturn was quite low to the horizon while I was imaging. They make a corrector for that, but I’m not sure I’m ready to fork over the cash for it quite yet…

From an “Atmospheric dispersion corrector” product description:

https://www.highpointscientific.com...bvBUFd5kWBdkDaTrcH--FnAxjJfJbjEMaAnYfEALw_wcB

The ZWO ADC, or Atmospheric Dispersion Corrector, reduces prismatic smearing during planetary imaging, resulting in images with finer details. It also improves the image when doing visual planetary observations, allowing the observer to see more surface detail.

Optical dispersion is an effect caused by the refractive quality of the atmosphere as light passes through it, and is dependent on the angle of the light as well as its wavelength. Optical dispersion spreads the incoming light into a vertical spectrum of colors, causing the object to appear higher in the sky than it truly is. The amount of “lift” that occurs is exaggerated when objects are closer to the horizon, and because optical dispersion is wavelength dependent, it causes the image to separate into different colors. That is why you will see a bluish fringe on the top of an object, and a red fringe at the bottom when atmospheric dispersion effects are particularly bad.

A correctly adjusted ADC, placed between the camera or eyepiece and a Barlow lens, will reduce the effects of optical dispersion and improve image resolution. It does this by applying the opposite amount of dispersion caused by the atmosphere to the image and then re-converging the light of the different wavelengths at the focal plane.


---

4x "nearest neighbor" enlargement (current distance 1.51 billion kilometers - 0.56 arcsec/pixel):
saturn_stacked_3_4x_2.jpg
One thing I'm quite happy about: I was 2-3x over the weight limit on my cheap $425 Star Adventurer 2i Pro tracker, but it still worked...

94C131B3-847B-4591-A86C-1D5415EBD5EE.jpeg


A373797A-8137-492B-AF8F-730D9EFC528E.jpeg
 
Last edited:
  • Like
Likes Drakkith and collinsmark
  • #199
Devin-M said:
I think my sharpness on Saturn was being limited a bit by atmospheric dispersion, based on the blue fringing at the top and red fringing at the bottom… Saturn was quite low to the horizon while I was imaging. They make a corrector for that, but I’m not sure I’m ready to fork over the cash for it quite yet…

Yes, I use the ZWO atmospheric dispersion corrector (ADC) for pretty much all my planetary work. It helps a fair amount, but it's not a panacea. It does work, though, for what it's worth; I found the money to be well spent.

---

The other main factor (from what I can tell from your Saturn image/video) is probably atmospheric seeing. Seeing conditions vary quite a bit from night to night, and they're not necessarily correlated with cloud cover. I.e., you can have nights with good seeing and clouds in the sky, clear skies with bad seeing, bad seeing and clouds, or (hopefully, sometimes) clear skies with good seeing.

For any given night, the best seeing around a target will usually be when the target crosses the meridian, because that is when the target is highest in the sky, and thus has less atmosphere to pass through.

----

Once you have your raw video data (ideally from a night and time with relatively good seeing), process it with a lucky-imaging program such as AutoStakkert! (it's free software). That software will throw away a fraction of the frames (say, maybe 50% -- whatever you specify) and warp the remaining frames so that they stack nicely. Then it stacks them, producing a single image as output. I suggest using a high-res uncompressed format such as .TIFF for your image. (And just to be clear: a video goes in → an image comes out.)

At that point, your image will still be blurry, but now you can coax out the detail using wavelet sharpening in a program such as RegiStax (also free software). Don't use RegiStax to do the stacking, since you've already done that in AutoStakkert!; instead, just open the image and go directly to wavelet sharpening.

The difference between any given raw frame and the final image out of RegiStax can be remarkable.
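For a feel of what the lucky-imaging step does (this is NOT AutoStakkert!'s actual algorithm, just a minimal sketch assuming OpenCV and a hypothetical capture file; real tools also align/warp each frame before stacking):

```python
import cv2            # assumes opencv-python
import numpy as np

cap = cv2.VideoCapture("saturn.avi")   # hypothetical capture file
frames = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(frame)
cap.release()

def sharpness(f):
    # Variance of the Laplacian: a common focus/sharpness score.
    gray = cv2.cvtColor(f, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

frames.sort(key=sharpness, reverse=True)
best = frames[: max(1, len(frames) // 2)]   # keep the sharpest 50%
stack = np.mean([f.astype(np.float32) for f in best], axis=0)
cv2.imwrite("saturn_stacked.png", np.clip(stack, 0, 255).astype(np.uint8))
```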

[Edit: Oh, my. I'm sorry if this post was off-topic. When I posted it, I thought this was the "Our Beautiful Universe -- Photos and Videos" thread. :doh:]
 
Last edited:
  • #200
collinsmark said:
Once you have your raw video data (ideally from a night and time with relatively good seeing), process it with a lucky-imaging program such as AutoStakkert! (it's free software). That software will throw away a fraction of the frames (say, maybe 50% -- whatever you specify) and warp the remaining frames so that they stack nicely. Then it stacks them, producing a single image as output. I suggest using a high-res uncompressed format such as .TIFF for your image. (And just to be clear: a video goes in → an image comes out.)

At that point, your image will still be blurry, but now you can coax out the detail using wavelet sharpening in a program such as RegiStax (also free software). Don't use RegiStax to do the stacking, since you've already done that in AutoStakkert!; instead, just open the image and go directly to wavelet sharpening.

The difference between any given raw frame and the final image out of RegiStax can be remarkable.

Thank you for your amazing suggestions!

I did 4 things:

1) I threw out quite a few of the blurrier raw files before stacking
2) I did the wavelet sharpening (amazing results in this step)
3) I individually nudged each of the color channels into alignment to correct the atmospheric dispersion (see the sketch after this list)
4) I did some final noise reduction and adjustment filters in Adobe Lightroom
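The channel nudge in step 3 can be done with whole-pixel shifts; a hedged sketch (the 2 px offsets are placeholders you'd tune by eye, filename hypothetical):

```python
import numpy as np
from PIL import Image  # assumes Pillow and NumPy

img = np.asarray(Image.open("saturn_stacked.png"))  # hypothetical RGB stack
r, g, b = img[..., 0], img[..., 1], img[..., 2]

# Atmospheric dispersion smears vertically: nudge red down and blue up
# until the coloured fringes on the limb disappear.
r = np.roll(r, 2, axis=0)
b = np.roll(b, -2, axis=0)

Image.fromarray(np.dstack([r, g, b])).save("saturn_aligned.png")
```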

After "Lucky Imaging" Selection - 63 RAW Images, Wavelet Sharpening, Channel Nudge and Noise Reduction (4x "nearest neighbor" enlargement):
saturn_wavelet_sharpened_channel_nudge_noise_reduction_4x.jpg


Stacked RAW Images (4x "nearest neighbor" enlargement):
saturn_stacked_3_4x.jpg


Typical Source Raw Image - All Noise Reduction Disabled (4x "nearest neighbor" enlargement):
1800mm f/12 6400iso 1/320th sec
saturn_source_4x.jpg
 
Last edited:
  • Like
Likes Drakkith and collinsmark