A Question about Imaging and PSFs

  • Context: Graduate
  • Thread starter: cepheid
  • Tags: Imaging

Discussion Overview

The discussion revolves around the implications of point spread functions (PSFs) in imaging, particularly in astronomical contexts versus everyday terrestrial imaging. Participants explore why certain imaging artifacts are more noticeable in specific scenarios, such as bright point sources compared to dimmer stars or extended objects like galaxies and nebulae. The conversation also touches on the effects of pixelation in imaging systems and the relationship between pixel size and PSF.

Discussion Character

  • Exploratory
  • Technical explanation
  • Conceptual clarification
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • Some participants suggest that the visibility of PSF artifacts, such as rings and spikes, is primarily an issue for diffraction-limited images and is more apparent for bright point sources against a dark background.
  • Others argue that dimmer stars may not exhibit these features because they fall below the detection limit of the imaging detector.
  • It is proposed that extended objects like galaxies and nebulae may appear clearer because their diffraction artifacts are less noticeable due to their overall brightness and the way they are imaged.
  • One participant notes that everyday imaging does not seem to suffer from these concerns because typical scenes are complex enough to mask optical aberrations.
  • Another participant introduces the idea that introducing discrete detectors invalidates the assumption of shift-invariance in optical systems, complicating the convolution model.
  • There is a discussion about the relationship between pixel size and PSF, with some suggesting that smaller pixels can help approximate linear shift-invariance under certain conditions.
  • One participant mentions that consumer electronics cameras, despite their small lenses, can produce satisfactory images due to blurring and compression effects that obscure potential artifacts.

Areas of Agreement / Disagreement

Participants express a range of views on the visibility of PSF artifacts and the effects of pixelation in imaging systems. There is no clear consensus on the implications of these factors, and the discussion remains unresolved regarding the nuances of imaging theory and practice.

Contextual Notes

Limitations include the dependence on specific imaging conditions, the complexity of the scenes being imaged, and the unresolved mathematical implications of pixelation in relation to PSF.

cepheid:
Hi,

If every point in the image plane is convolved with the PSF, why is it that this is only obvious in certain cases?

Take astronomical imaging: for images of bright point sources (e.g., the brightest stars), we see rings, spikes etc. Why do we not see these features for dimmer stars? Furthermore, what about images of extended objects? Why is it that galaxies and nebulae look fine, and don't look like some sort of blurred mess?

Also, what is it fundamentally about everyday/terrestrial imaging that makes it so that these concerns don't seem to matter at all? Why is it that I can feel confident that more pixels = a sharper image, without having to worry about the actual *optics?* One would think that the minuscule lenses included with ever smaller consumer digital electronics would offer pretty lousy angular resolution.
 
mgb_phys:
cepheid said:
If every point in the image plane is convolved with the PSF, why is it that this is only obvious in certain cases?
It's only a problem for diffraction-limited images, and only obvious for points on a dark background.
Take astronomical imaging: for images of bright point sources (e.g., the brightest stars), we see rings, spikes etc. Why do we not see these features for dimmer stars?
Because they are below the detection limit of the detector. If only 0.1% of the energy goes into the spikes, you might see it for a 6 mag star but not a 25 mag galaxy.
Furthermore, what about images of extended objects? Why is it that galaxies and nebulae look fine, and don't look like some sort of blurred mess?
They are a blurred mess at scales below an arcsecond.

Also, what is it fundamentally about everyday/terrestrial imaging that makes it so that these concerns don't seem to matter at all? Why is it that I can feel confident that more pixels = a sharper image, without having to worry about the actual *optics?*
The optics generally aren't diffraction-limited, and the scenes are normally confused enough that you don't see them. If you are one of the sad bores on photo forums who look at individual pixels in photos of test charts to prove your camera is best, you will.
One would think that the miniscule lenses included with ever smaller consumer digital electronics would offer pretty lousy angular resolution.
They are pretty bad, but this leads to blurring which, combined with the heavy JPEG compression, means you don't see the effects.
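The magnitude arithmetic behind the "0.1% of the energy" point above can be spelled out; the 6 mag and 25 mag figures and the spike fraction are taken from the post, and the standard relation is that a difference of Δm magnitudes is a flux factor of 10^(0.4 Δm):

```python
# Flux ratio between two astronomical magnitudes: a difference of delta_m
# magnitudes corresponds to a factor of 10**(0.4 * delta_m) in flux.
def flux_ratio(m_bright, m_faint):
    """How many times brighter the m_bright source is than the m_faint one."""
    return 10 ** (0.4 * (m_faint - m_bright))

spike_fraction = 0.001  # 0.1% of the light in the spikes (figure from the post)

# The spikes of a 6 mag star versus the *total* light of a 25 mag source:
ratio = spike_fraction * flux_ratio(6, 25)
print(f"{ratio:.2e}")  # ~4e4: the bright star's spikes alone outshine the
                       # faint source by over four orders of magnitude
```

So even the 0.1% that leaks into the spikes of a bright star is tens of thousands of times brighter than a faint galaxy's entire signal, which is why the artifacts are visible for one and irrelevant for the other.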
 
Thank you for the explanations mgb_phys. I just wanted to see if I could trust the physics and apply it to the situation in as straightforward a way as I was attempting to.

I guess for terrestrial imaging, it is, as you said, a question of not ever really having to worry about the kind of angular resolution that you need in astronomy. Nobody worries about why you can't see individual trees in a forest tens of kilometres away. Things look reasonable, like the way you'd expect them to look.

One more thing, if I may. You mentioned that the scenes (obviously much busier than a bunch of bright points on a dark background) are "confused." What exactly does that mean? I have some vague idea that the confusion limit occurs when you're looking deep enough that you see so many sources, it is impossible to distinguish them from the background noise (again speaking in an astronomy-specific context, sorry).
 
I didn't mean it in a technical sense; I meant that a random background of trees/people etc. disguises obvious optical aberration, whereas bright point sources on an empty background emphasize them.
 
Right okay...that makes sense. Thanks for the clarification.
 
cepheid said:
If every point in the image plane is convolved with the PSF, why is it that this is only obvious in certain cases? [...]

Introducing a discrete detector (pixels) means the optical system is no longer shift-invariant, so strictly speaking it is no longer proper to model imaging as a convolution operation.

That said, if the pixels are smaller than the PSF, one can approximate the system as linearly shift-invariant. The rings, spikes, etc. are diffractive artifacts of the aperture, and depending on how the overall brightness of the image is scaled, the details of dimmer objects can be lost; note that in order to see these artifacts, there is usually blooming present in the central peak. There's no contradiction between imaging points and extended objects: the diffraction artifacts are lessened because the side-lobes are much dimmer than the central peak and get washed out when imaging extended objects, so the image simply appears blurry.
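The "pixels smaller than the PSF" condition can be sketched numerically. A minimal demonstration, assuming a Gaussian stand-in for the PSF: integrate it over pixel bins for two sub-pixel source positions and see how much the sampled profile changes. With fine pixels the change is tiny (approximate shift-invariance); with pixels comparable to the PSF width it is large:

```python
import numpy as np

def binned_psf(center, pixel_size, sigma=1.0, half_extent=5.0, oversample=100):
    """Integrate a Gaussian PSF (std sigma) over pixels of width pixel_size."""
    n_pix = int(2 * half_extent / pixel_size)
    n_sub = n_pix * oversample
    # Midpoints of fine sub-samples spanning [-half_extent, half_extent].
    x = -half_extent + (np.arange(n_sub) + 0.5) * (2 * half_extent / n_sub)
    psf = np.exp(-0.5 * ((x - center) / sigma) ** 2)
    # Sum the fine samples falling in each pixel bin.
    return psf.reshape(n_pix, oversample).sum(axis=1)

# How much does the brightest pixel change when the source moves by half a
# pixel?  (PSF sigma is 1.0 in the same units as pixel_size.)
peak_change = {}
for pix in (2.0, 0.1):  # coarse vs fine pixels
    centered = binned_psf(0.0, pix)
    shifted = binned_psf(pix / 2, pix)
    peak_change[pix] = abs(shifted.max() - centered.max()) / centered.max()
    print(f"pixel size {pix}: fractional peak change {peak_change[pix]:.4f}")
```

With pixels twice the PSF width, the peak pixel value changes by roughly 30% depending on where the source lands within a pixel; with pixels a tenth of the PSF width, the change is around 0.1%. That is the sense in which small pixels recover approximate shift-invariance.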

Pixelated imaging systems can behave very differently from continuous ones; aliasing is the effect people most often recognize, and the key is proper matching of the pixel size to the PSF, something accomplished by adjusting the numerical aperture of the system. An excellent resource on this topic is "Analysis of Sampled Imaging Systems" by Ronald Driggers (SPIE Press). But yes, those little cameras in consumer electronics are quite impressive; I wouldn't mind seeing the optical layout.
 