Diffraction Effects and Artifacts in Telescopes like the JWST

AI Thread Summary
The hexagonal shape observed in bright stars from the JWST images is primarily due to diffraction artifacts caused by the telescope's internal optics. While all stars exhibit these diffraction effects, they are more noticeable in brighter stars due to saturation and image processing techniques. Dimmer stars may still have the same hexagonal artifacts, but they blend into the background, making them less visible. The discussion highlights that the appearance of these artifacts can vary significantly based on how the raw data is processed. Overall, diffraction artifacts are a consistent feature across JWST images, influenced by brightness and processing methods.
collinsmark
[Mentor Note -- Discussion spun off from the main JWST thread]

pinball1970 said:
What is this top right? It looks very symmetrical in shape and colour, just an optical effect from the telescope like diffraction spikes?

Yes, the apparent hexagonal shape of that star is due to diffraction within the telescope optics (optical filter characteristics may also have played a role).

For clarity, we're talking about this star:
[attached image: the star in question]


sophiecentaur said:
But why aren’t all (bright) star images like it?

They are (images from the same telescope optics, that is). You can even see it in the main subject star in the original image:
[attached image: the original image, with the central star]


Ignore the dust rings for a moment, and concentrate on the central star. Its apparent shape is a hexagon, just like the other star (same size even). It's just that it's so much brighter that all the color detail is saturated (blown highlights), so it just looks like a white hexagon.

You might not see the hexagon shape in the other stars because they are relatively dimmer. With the dimmer stars, the outer six lobes of the hexagonal pattern blend more easily into the background. But they're technically still there. You might be able to spot more if you look closely.

What gets saturated and what doesn't depends not only on the telescope's optics (including any optical filters used) and the exposure settings, but also on the image-processing details, such as how the histogram was stretched to produce the final image.
 
collinsmark said:
For clarity, we're talking about this star:
That's a help. I was talking about the 'dust rings'.
collinsmark said:
Ignore the dust rings for a moment,
But aren't they the most remarkable feature?
collinsmark said:
You might not see the hexagon shape in the other stars because they are relatively dimmer.
Problem is, there are thousands of JWST star images with a range of exposure values.

There are a number of odd-shaped objects out there - take this square nebula, for example. That's not considered to be an optical artefact. Diffraction effects are very repeatable; all that's needed is for the brightness of the object to be appropriate - just as the diffraction spikes in images from Newtonian scopes only seem to be present on the brighter stars. I'm not saying that the image cannot be due to diffraction - just that, if it were, there would be many, many other examples of the same thing.
 
sophiecentaur said:
But aren't [the dust rings] the most remarkable feature?

Yes, of course. The dust rings in the image are from real, physical dust rings in space.

The hexagonal appearance of the (bright) stars is a diffraction artifact caused by JWST's internal optics.

sophiecentaur said:
Problem is, there are thousands of JWST star images with a range of exposure values.

There are a number of odd-shaped objects out there - take this square nebula, for example. That's not considered to be an optical artefact. Diffraction effects are very repeatable; all that's needed is for the brightness of the object to be appropriate - just as the diffraction spikes in images from Newtonian scopes only seem to be present on the brighter stars. I'm not saying that the image cannot be due to diffraction - just that, if it were, there would be many, many other examples of the same thing.

Yes, I believe we are in agreement on this. There was a question about diffraction artifacts, and an example of a star in the image showing those artifacts. The hexagonal artifacts (as well as any diffraction spikes) have a different cause than the actual dust rings. That's all I meant to say.

--------------
Edit: On a slightly different note, I should point out that diffraction artifacts and diffraction spikes are actually present for all the stars and all the objects in every JWST image; it's just that they are usually too dim to show up in the final processed image for all but the brightest stars.

But if you were to take the original raw data and stretch it to its limits, you would see these artifacts make up everything in the image (well, at least until the dimmer artifacts blur into the noise floor of the sensor).

It's almost like painting a picture with a brush that is hexagonal shaped, has a much brighter spot in the very center, and has dim diffraction spikes out on the sides. If you want to paint a dim star, you barely touch the brush against the black canvas, and only the center dot shows up much. But for really bright stars, the center dot as well as the hexagonal shape get blown out white, and even the spikes might begin to show through. And even things in between, such as the nebulosity, are all painted with the same brush.
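If it helps, here's a minimal numerical sketch of that "brush" idea in Python (numpy and scipy assumed; the cross-shaped PSF below is a made-up stand-in for JWST's real hexagonal pattern). Every source gets convolved with the same pattern; only the amplitude changes.

import numpy as np
from scipy.signal import fftconvolve

# Toy point-spread function ("brush"): a bright core plus faint spikes.
# This is an illustrative stand-in, not JWST's actual PSF.
psf = np.zeros((41, 41))
psf[20, :] = 0.01   # faint horizontal spike
psf[:, 20] = 0.01   # faint vertical spike
psf[20, 20] = 1.0   # bright central spot

# A field containing one bright star and one dim star (point sources).
sky = np.zeros((200, 200))
sky[50, 50] = 1000.0    # bright star
sky[150, 150] = 10.0    # dim star

image = fftconvolve(sky, psf, mode="same")

# The spike-to-core ratio is identical for both stars: the pattern has the
# same shape and size everywhere; only its overall brightness scales.
print(image[50, 30] / image[50, 50])      # ~0.01 (bright star)
print(image[150, 130] / image[150, 150])  # ~0.01 (dim star)

Whether those faint 1% spikes survive into a final 8-bit image is then purely a question of where the stretch and the noise floor land.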
 
collinsmark said:
The hexagonal artifacts (as well as any diffraction spikes) have a different cause than the actual dust rings.
A different cause, yes, but that particular image has concentric red and green haloes. Do we see those on every star image (in addition to the six spikes from the three struts)? Anything that the telescope optics introduces would be common to many images, but are those hexagonal halos common on fainter star images? This is my problem with your suggested explanation - I'd say a third effect has to be present when the sensor is operating way down its output range. (Ref that 'square nebula' image, which is not explained by telescope diffraction.)
[attached image: the star with concentric haloes]


The bright spikes are only seen for bright stars that are saturating the image sensor at their centre; the spikes are very much fainter than that - levels of the same order as fainter stars whose spikes we do not see because they are too faint.
 
sophiecentaur said:
A different cause, yes, but that particular image has concentric red and green haloes. Do we see those on every star image (in addition to the six spikes from the three struts)? Anything that the telescope optics introduces would be common to many images, but are those hexagonal halos common on fainter star images? This is my problem with your suggested explanation - I'd say a third effect has to be present when the sensor is operating way down its output range. (Ref that 'square nebula' image, which is not explained by telescope diffraction.)

The bright spikes are only seen for bright stars that are saturating the image sensor at their centre; the spikes are very much fainter than that - levels of the same order as fainter stars whose spikes we do not see because they are too faint.

Yes, the artifacts are caused by the characteristics of the telescope's optics. The wavelength of light is also a factor, so the diffraction artifacts will manifest differently for different colors.

And the way the data is processed also impacts how easily the diffraction artifacts are noticeable. So different JWST images might show different artifacts, even though they share the same optics, just because they were processed differently.

Here's another image based on the same data (if I'm not mistaken), just processed differently, with a different stretch curve (and rotated and cropped differently).
[attached image: alternate processing of the same data]
 
Let me elaborate on one more thing, just for clarity.

Refer to the same image in my last post (shown again here):
[attached image: the same image as in my last post]


For the moment, ignore the dust rings (shown in pink/orange in the image). Those are real rings out in space, gigantic in size. I'm not talking about those in this post; I'm just discussing the diffraction artifacts for the rest of this post.

In the image, notice the obvious diffraction spikes of the central star. Then there are a couple of other stars -- one to the left, and another to the lower right -- that show some diffraction artifacts, and the diffraction spikes are just barely visible. The rest of the stars don't seem to have any noticeable artifacts or spikes.

Based on this, one might be inclined to think that brighter stars produce bigger artifacts and bigger spikes in the telescope's optics. This is false. That's not how it works.

In truth, brighter stars produce brighter artifacts and brighter spikes in the telescope's optics. Brighter, yes, but not "bigger."

Yes, those two stars -- one to the left and the other to the lower right -- have the same-sized artifacts and spikes as the central star. They're just so dim that you can't make them out, except for the brightest part of the artifacts.

The central star is bright enough such that the hexagonal artifact is so bright (due to the way the image was processed) it blows out the whole central region, obfuscating any detail. Again, don't try looking for a bigger hexagonal pattern from the central star. It's not bigger. The artifact is the same size as the others. It's just brighter.

And the rest of the stars? They have the artifacts and spikes too, but the stars are dim enough such that the artifacts and spikes are so dim they are not noticeable in the image.
 
collinsmark said:
In the image, notice the obvious diffraction spikes of the central star. Then there are a couple of other stars -- one to the left, and another to the lower right -- that show some diffraction artifacts, and the diffraction spikes are just barely visible. The rest of the stars don't seem to have any noticeable artifacts or spikes.

Based on this, one might be inclined to think that brighter stars produce bigger artifacts and bigger spikes in the telescope's optics. This is false. That's not how it works.

In truth, brighter stars produce brighter artifacts and brighter spikes in the telescope's optics. Brighter, yes, but not "bigger."
I think this is using "bigger" in a very non-standard way. The images of aberration-free point sources will all have the same shape, but their apparent observable size will depend on the intensity of the source. If you measure the size of the core of a point source's image using the standard FWHM (Full Width at Half Maximum), you will get different results for different magnitude stars. Similarly, if you measure the size of the diffraction spikes in the image using any normal technique, the sizes will differ.

In your sense, all images of objects are infinite in size, as the diffraction artifacts are theoretically without bound!
Regards Andrew
 
andrew s 1905 said:
The images of aberration-free point sources will all have the same shape, but their apparent observable size will depend on the intensity of the source. If you measure the size of the core of a point source's image using the standard FWHM (Full Width at Half Maximum), you will get different results for different magnitude stars. Similarly, if you measure the size of the diffraction spikes in the image using any normal technique, the sizes will differ.
It depends how you processed the image.

The two images below are the same image, each processed differently by me from raw JWST sensor data:
[attached images: the same frame, processed two different ways]


You can see that in the second image, the diffraction spikes of the 2nd brightest star continue all the way to the top and bottom of the image frame.

Using the Photoshop color sampler, I inspected the first image; the red-circled area had an RGB value of 0,0,0.
[attached images: color-sampler screenshot and the stretched frame]
 
Devin-M said:
You can see that in the second image, the diffraction spikes of the 2nd brightest star continue all the way to the top and bottom of the image frame.
Indeed they do. They have the same shape, but even to a casual observer they have different widths. Are you really saying they are of the same dimensions? Cut and paste one onto the other: are they identical?

Also, as I pointed out, the FWHM of the stars is different. It all depends on whether the spikes are above the noise floor or not. Spikes from many stars in your stretched image don't reach the bottom.

Regards Andrew
 
  • #10
andrew s 1905 said:
Indeed they do. They have the same shape, but even to a casual observer they have different widths. Are you really saying they are of the same dimensions? Cut and paste one onto the other: are they identical?

Also, as I pointed out, the FWHM of the stars is different. It all depends on whether the spikes are above the noise floor or not. Spikes from many stars in your stretched image don't reach the bottom.

Regards Andrew
The apparently brighter star has smaller diffraction spikes than the apparently dimmer one when I process the stars differently...

[attached images: the dimmer and brighter stars, each processed differently]
 
  • #11
The star on the right is apparently brighter, but has shorter diffraction spikes than the apparently dimmer star on the left, because each half of the image was processed differently.
[attached images: side-by-side comparison and the original frame]
 
  • #12
andrew s 1905 said:
The images of aberration-free point sources will all have the same shape, but their apparent observable size will depend on the intensity of the source.
Yes, in a linear world.
This is down to the sensor's inherent non-linearity and other non-linearities from image processing. No surprises. Astrophotographers are concerned with displaying whatever feature they need in order to present their results.
 
  • #13
Devin-M said:
The star on the right is apparently brighter, but has shorter diffraction spikes than the apparently dimmer
It's not about how you process the image. If you restrict yourself to linear processing and don't saturate the image, then a properly formulated measure of size will give the same result; FWHM is an example. When discussing how big something is, you have to specify how you will measure it.
Regards Andrew
 
  • #14
andrew s 1905 said:
It's not about how you process the image. If you restrict yourself to linear processing and don't saturate the image, then a properly formulated measure of size will give the same result; FWHM is an example. When discussing how big something is, you have to specify how you will measure it.
Regards Andrew
Not 'wrong', but the fact is that most images you see are subject to non-linearity / the available contrast ratio. As with lithographic imaging, the brightness of an image will translate into size. In printing, this allows half-tone images to be printed using a 'screen' and regular (on/off) ink.

To get equally sized spikey star images, you need enough quantising levels to cope with the brightest bits of the brightest star and the dimmest bits of the spikes of the dimmest star. Hence we see just a few spikey stars and a lot of 'round' ones on most space images. It's a real world.
 
  • #15
sophiecentaur said:
Not 'wrong', but the fact is that most images you see are subject to non-linearity / the available contrast ratio. As with lithographic imaging, the brightness of an image will translate into size. In printing, this allows half-tone images to be printed using a 'screen' and regular (on/off) ink.

To get equally sized spikey star images, you need enough quantising levels to cope with the brightest bits of the brightest star and the dimmest bits of the spikes of the dimmest star. Hence we see just a few spikey stars and a lot of 'round' ones on most space images. It's a real world.
I don't disagree, but I was responding to a post where it was claimed the diffraction pattern, specifically the spikes, were all the same "bigness". With image processing you can achieve almost anything. I was focused on scientific measurements.
Regards Andrew
 
  • #16
A measurement of FWHM doesn’t give you the length (pixels or arc seconds) of the entire diffraction spike.
 
  • #17
collinsmark said:
Based on this, one might be inclined to think that brighter stars produce bigger artifacts and bigger spikes in the telescope's optics. This is false. That's not how it works.

In truth, brighter stars produce brighter artifacts and brighter spikes in the telescope's optics. Brighter, yes, but not "bigger."
While in theory all stars or other "point sources" of a given wavelength and seen through a given aperture have the same Airy disk radius characterized by the above equation (and the same diffraction pattern size), differing only in intensity, the appearance is that fainter sources appear as smaller disks, and brighter sources appear as larger disks.[5]

Because any detector (eye, film, digital) used to observe the diffraction pattern can have an intensity threshold for detection, the full diffraction pattern may not be apparent.
https://en.m.wikipedia.org/wiki/Airy_disk
 
  • #18
Another way to demonstrate that the diffraction spikes are always the same shape… I take my Nikon D800 DSLR and insert a clip-in narrowband OIII filter in front of the sensor (which only allows one particular wavelength through to the sensor). Next I place a Bahtinov focusing mask in front of the objective lens of my telescope, which creates diffraction spikes.

[attached images: the clip-in OIII filter, the Bahtinov mask, and the resulting diffraction spikes]
 
  • #19
Devin-M said:
Of course the spacing will be the same, but a fainter image will not produce such long spikes because lower-amplitude dots are not recordable.

Are we all talking at cross purposes here, perhaps?
 
  • #20
andrew s 1905 said:
If you measure the size of the core of a point source's image using the standard FWHM (Full Width at Half Maximum), you will get different results for different magnitude stars. Similarly, if you measure the size of the diffraction spikes in the image using any normal technique, the sizes will differ.

In your sense, all images of objects are infinite in size, as the diffraction artifacts are theoretically without bound!
Regards Andrew
This is not true for linear data.

As a matter of fact, that's the very reason why full width half maximum (FWHM) is such a useful parameter: So long as the star is exposed below the point of saturation, and so long as the star is exposed enough such that the noise floor is negligible -- in other words, so long as the star's exposure is within the linear region -- the star's FWHM is independent of the star's brightness (brightness due to its inherent magnitude or due to exposure characteristics). The star's FWHM doesn't change with brightness so long as the star's image is still linear.

That's why FWHM is such a good measure to be used by autofocus algorithms for telescope autofocusers.
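A quick numerical check of that, as a sketch (numpy assumed; a Gaussian profile stands in for the PSF core):

import numpy as np

def fwhm(profile):
    # Full width at half maximum of a 1-D profile, in samples.
    above = np.where(profile >= profile.max() / 2.0)[0]
    return above[-1] - above[0]

x = np.arange(-50, 51)
star = np.exp(-x**2 / (2 * 5.0**2))  # Gaussian stand-in for a star's core

# Scaling the brightness leaves the FWHM untouched (linear data,
# no saturation, noise ignored).
for brightness in (1.0, 100.0, 10_000.0):
    print(brightness, fwhm(brightness * star))  # same width every time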

In your sense, all images of objects are infinite in size, as the diffraction artifacts are theoretically without bound!

Yes, in principle that is what I am saying -- that is ideally, when the data is linear and we can ignore saturation and noise floors. Diffraction artifacts are in that sense without bound.
 
  • #21
When you factor in photon shot noise, an individual photon from a dim star should have the same probability of ending up at a given "diffraction point" as an individual photon from a bright star. For example, a single photon from a dim star could end up striking the sensor farther from the star's apparent position, due to shot noise and diffraction, than a single photon from a bright star.

https://en.m.wikipedia.org/wiki/Shot_noise
[attached image: photon noise simulation -- the number of photons per pixel increases from left to right and from the upper row to the bottom row]

[attached image: the diffraction-spike photo from the Bahtinov mask test above]
 
  • #22
In relevant ways, this is going to be a repeat of the points I made in my last post:

andrew s 1905 said:
It's not about how you process the image. If you restrict yourself to linear processing and don't saturate the image then a properly formulated measure of size will give the same result e,g. FWHM is an example. When discussing how big something is you have to specify how you will measure it.
Regards Andrew

Full Width Half Maximum (FWHM) is a prime example of what I'm talking about. In the linear domain, the FWHM of a star's image does not change with the star's brightness. The brightness might change, but the FWHM does not.

The same is true with diffraction artifacts. Brighter stars will produce brighter diffraction artifacts. But if you scale the brightness of the diffraction artifacts linearly with the brightness of the imaged star, nothing changes.

This of course assumes the [relevant objects in the] image is [are] in the linear region, such that saturation and the noise floor can be effectively ignored.
 
  • #23
collinsmark said:
This is not true for linear data.
Where would you expect to find "linear data"? As you point out, using the half-power radius is a useful tool because half power allows guiding with dim stars, but that's not the same as claiming that spikes (which actually go on for ever, theoretically) will all be visible for all stars. Your data near the peak can be considered linear, but it eventually ends up being non-linear.

The bit depth of the cameras appears to be 16, which corresponds to a maximum relative magnitude of about 12, at which point the artefacts will disappear. That implies a star that is exposed at the limit will only display artefacts down to a magnitude of 12, relative to the correctly exposed star.
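(That figure comes from converting the 16-bit dynamic range into magnitudes:

\Delta m = 2.5 \log_{10}(2^{16}) = 16 \times 2.5 \log_{10} 2 \approx 12.04.)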
'Seeing' is excellent at a Lagrange point, but scatter (artefacts, if you like) from nearby bright stars can limit how far down you can go, and I suspect that figure of 12 for relative magnitude could be optimistic. Signal-to-noise ratio can be eaten into by 'interference' / crosstalk.

Signal processing can be a big help but that will take us away from a linear situation and could suppress nearby faint stars further.
 
  • #24
sophiecentaur said:
Where would you expect to find "linear data"? As you point out, using the half-power radius is a useful tool because half power allows guiding with dim stars, but that's not the same as claiming that spikes (which actually go on for ever, theoretically) will all be visible for all stars. Your data near the peak can be considered linear, but it eventually ends up being non-linear.

The bit depth of the cameras appears to be 16, which corresponds to a maximum relative magnitude of about 12, at which point the artefacts will disappear. That implies a star that is exposed at the limit will only display artefacts down to a magnitude of 12, relative to the correctly exposed star.
'Seeing' is excellent at a Lagrange point, but scatter (artefacts, if you like) from nearby bright stars can limit how far down you can go, and I suspect that figure of 12 for relative magnitude could be optimistic. Signal-to-noise ratio can be eaten into by 'interference' / crosstalk.

Signal processing can be a big help but that will take us away from a linear situation and could suppress nearby faint stars further.
I believe the JWST is operating at 32-bit monochrome. At least, that's what the FITS files are. When you first open a file, the only part that's non-linear is the "black spots" at the centers of the brightest stars: whenever a pixel becomes saturated, it turns black, counterintuitively.

When you first open any of the files, all or almost all of the frame appears totally black on the computer monitor. In order to see any of the stars, galaxies and nebulae, you have to "histogram stretch" the data. That's the point where the data becomes non-linear and processing comes into play. If you don't stretch the data, you won't see anything.

That's because JPG files and computer monitors only display 256 levels of brightness (8 bits per color channel), while a 32-bit file can record 4,294,967,296 levels of brightness. So when you stretch the data you're essentially choosing which of those 4,294,967,296 levels to "throw away", because you can only keep 256 (and then you're re-mapping those remaining 256 brightnesses to the ones your monitor can show). So basically the JWST records the data linearly, but if you keep it that way you can't actually "see" anything.
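As a rough sketch of that workflow in Python (astropy and numpy assumed; the file name is a placeholder, and the asinh stretch is just one common choice):

import numpy as np
from astropy.io import fits

# Placeholder file name -- any calibrated JWST product from MAST would do.
data = fits.getdata("jw_example_i2d.fits").astype(np.float64)

# Displayed linearly, nearly every pixel sits at the bottom of the range,
# so the frame looks essentially black.
lo, hi = np.nanpercentile(data, [1.0, 99.9])
linear = np.clip((data - lo) / (hi - lo), 0.0, 1.0)

# A nonlinear stretch lifts the faint end and compresses the bright end.
stretched = np.arcsinh(500.0 * linear) / np.arcsinh(500.0)

# Quantize to the 256 levels a JPG/monitor can show, discarding the rest.
display = (stretched * 255.0).astype(np.uint8)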
 
  • #25
collinsmark said:
The same is true with diffraction artifacts. Brighter stars will produce brighter diffraction artifacts. But if you scale the brightness of the diffraction artifacts linearly with the brightness of the imaged star, nothing changes.
Yes, they have the same shape. On the image, their dimensions differ. If you insist on scaling things of the same shape, then they can be made to have the same size. Regards Andrew
 
  • #26
andrew s 1905 said:
On the image, their dimensions differ.
How will you measure the difference in length when each star has diffraction effects all the way to the edge of the image frame? The diffraction of point sources occurs in the atmosphere & optical tube assembly, not in the star or the sensor.
 
  • #27
Devin-M said:
I believe the JWST is operating at 32 bit monochrome
That's what you get when you stack many different images. Stacking allows faint parts of an image to appear way down in the brightness distribution.
You make a valid point by bringing in the 32-bit issue and the limits of our displays. That could account for disappearing low-level detail; after all, the cameras are wide-field and we often only see a small portion of a full image.
 
  • #28
sophiecentaur said:
Where would you expect to find "linear data"?

All astrophotos start as linear. The analog-to-digital unit (ADU) value of a pixel is a linear function of the number of photons striking that pixel, up to the point of saturation at the high end, and down to the quantization and noise at the low end. It means that when you plot ADU vs. photon count, there's a big long stretch in the middle that's a straight line, i.e., linear.

This linear relationship is sacrificed in the processing, when the nonlinear "stretch" is applied, not to mention other nonlinear processing such as curves and contrast adjustments. Pretty much every JWST deep-sky astrophoto published to the media has a nonlinear stretch applied (in addition to other nonlinear operations); otherwise it wouldn't look like much of anything.

The characteristics of this stretch and these curves are chosen subjectively, usually decided by the artistic whims of the person processing the data. And it is these artistic choices that cause dimmer stars to appear point-like and brighter stars to have spikes. Had the person made different artistic choices when processing the data, it would change how many stars have artifacts and how many do not.

But it all starts as linear (before the artistic, nonlinear stretch). Even at the dimmer side of the curve, things can still be treated as having a linear relationship with the addition of noise added on top. I.e., the relationship between actual signal and ADU is still linear even in the dimmer regions, in that sense.

sophiecentaur said:
As you point out, using half power radius is a useful tool because half power allows guiding with dim stars but that's not the same as claiming that spikes (which actually go on for ever, theoretically) will all be visible for all stars. Your data near the peak can be considered to be linear but it eventually ends up being non-linear
No. Even if that wording isn't strictly incorrect (and it might be), it's at least misleading.

The characteristics of the diffraction patterns, including the spikes, are not a nonlinear function of intensity. The relationship is linear, all the way down. (And even if we take sensor imperfections into consideration, it's still mostly linear, all the way down.)

Yes, of course there are limitations of the sensor. But that's not the point. We're talking about characteristics of the diffraction pattern (including the spikes) before they even strike the sensor.

The diffraction pattern itself is not a function of intensity (including the spikes). The shape and size of the pattern are independent of the number of photons reaching the detector. The only thing that changes as you increase the number of photons is the number of photons. That's it.

-----------

Let me explain it another way.

I'm sure you are familiar with the double-slit experiment. That's the experiment where light passes through a pair of slits, and an interference pattern is formed on the backdrop.

As I'm sure you know, the interference pattern is the same regardless of the photon rate. A moderately bright light source will produce the same pattern as the pattern produced if the photons were sent one at a time. Sending more photons only has the effect of more photons; the pattern itself doesn't change, the only difference is the number of photons.

JWST's optics (or the optics of any telescope, for that matter) work the same way. But instead of just having a pair of slits, you have mirrors, struts, hexagonal patterns in the primary mirror, and other obstructions. Taken together, when photons from a light source pass through these optics they too will form an interference pattern when striking the sensor.

And that interference pattern is the pattern we see, with the comparatively bright central spot, the somewhat dimmer hexagonal pattern surrounding it, and even the "diffraction spikes." These are all part of the same pattern that every last photon is subject to as it passes through JWST's optics.

One difference from the double-slit experiment that should be noted is that in the case of JWST, photons can come from slightly different directions (not all light sources are along the central axis), which shifts the corresponding interference pattern to different parts of the sensor, depending on the direction of the photon source.

But identical to the double slit experiment, photons from a given light source have the same interference pattern regardless of the photon rate. It doesn't matter if many photons pass through in quick succession or pass through slowly, one at a time. The pattern on the sensor is the same.

Claiming that any characteristic of the diffraction pattern (including the spikes), other than mere photon count, is a function of the star's brightness is simply false. It's akin to saying the interference pattern of the double-slit experiment is a function of the photon rate. It's not. The claim is false.
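Here's a sketch of that point in Python (numpy assumed; a toy two-slit intensity pattern, with photons drawn from it one at a time):

import numpy as np

rng = np.random.default_rng(0)

# Toy double-slit intensity pattern across a 1-D screen.
x = np.linspace(-10.0, 10.0, 400)
intensity = np.cos(3.0 * x) ** 2 * np.sinc(x / np.pi) ** 2
p = intensity / intensity.sum()  # landing probability for each photon

# A "dim" source delivers 1,000 photons; a "bright" one, 1,000,000.
dim = rng.multinomial(1_000, p)
bright = rng.multinomial(1_000_000, p)

# Normalized, both histograms converge on the same pattern; the only
# difference between the sources is the number of photons collected.
print(np.abs(dim / dim.sum() - p).max())        # noisy, but same shape
print(np.abs(bright / bright.sum() - p).max())  # much closer to p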

sophiecentaur said:
The bit depth of the cameras appears to be 16, which corresponds to a maximum relative magnitude of about 12, at which point the artefacts will disappear. That implies a star that is exposed at the limit will only display artefacts down to a magnitude of 12, relative to the correctly exposed star.
'Seeing' is excellent at a Lagrange point, but scatter (artefacts, if you like) from nearby bright stars can limit how far down you can go, and I suspect that figure of 12 for relative magnitude could be optimistic. Signal-to-noise ratio can be eaten into by 'interference' / crosstalk.

Signal processing can be a big help but that will take us away from a linear situation and could suppress nearby faint stars further.

You lost me here.
  • Seeing is caused by the atmosphere, and JWST is not subject to seeing since it is outside the atmosphere.
  • Scattering? Are you suggesting light from far away stars is bouncing off nearby stars? I'm not sure what you mean here.
  • Interference and crosstalk? You really lost me on that. In conventional communication systems, interference and crosstalk can be caused by nonlinear components in the system, but for JWST, the signal is completely linear all the way to the sensor, and then still mostly linear (saturation being an exception) up until the point of the artistic "stretch" -- the stretch that's performed before the image is posted to the media. But before that, everything is linear.
 
  • #29
collinsmark said:
All astrophotos start as linear.
I'm afraid there are too many points in your post to be addressed.

OK, I already accepted that the sensor itself is linear over a wide range and that stacking will allow a substantial increase in effective bit depth by averaging out random noise. I also know that the maths of diffraction goes on and on, as far down as you like. Real life is not like that. We always run out of range because noise and interference are present.

Do you have a reference for 32-bit ADCs on the JWST? Wherever I have found the bit depth of the sensor arrays mentioned, it's been 16 bits. Stacking can be achieved in many ways, and they are all based on non-linear processing to reject spurious data, so I have to assume that, in fact, the linearity / purity of the images is restricted to 16 bits. JWST doesn't need defending, and 16-bit data has been quite adequate as a source of fantastic and revealing images.
 
  • #30
sophiecentaur said:
Do you have a reference for 32-bit ADCs on the JWST? Wherever I have found the bit depth of the sensor arrays mentioned, it's been 16 bits. Stacking can be achieved in many ways, and they are all based on non-linear processing to reject spurious data, so I have to assume that, in fact, the linearity / purity of the images is restricted to 16 bits. JWST doesn't need defending, and 16-bit data has been quite adequate as a source of fantastic and revealing images.

As far as I can tell from the metadata of the JWST raw data, most if not all of the observations are single exposures, not multiple stacked exposures. It does take multiple monochrome exposures, each through a different filter, to produce a color image, but that's not the same as "stacking" to reduce noise. Since the downloadable images are calibrated to remove noise, however, it would make sense for them to perform those calibrations at as high a bit depth as possible, i.e., 32-bit, even if the sensors are only operating at 16-bit.

Accumulated analog charges are converted to 16-bit digital signals with up to 65,535 data counts (analog-to-digital units, or ADU) in each pixel. The gain values are roughly 2 electrons per data count (2 e–/ADU) on average, varying among the detectors as well as among pixels within each detector (see the Detector Performance article).

https://jwst-docs.stsci.edu/jwst-ne...rcam-instrumentation/nircam-detector-overview

Well capacity is equal to or greater than 65000…

https://www.teledyne-si.com/products/Documents/TSI-0855 H2RG Brochure-25Feb2022.pdf
 
  • #31
Isn't the maximum diffraction angle of a photon from a dim star the same as the maximum diffraction angle of a photon from a bright star, for the reasons stated in @collinsmark's previous post?

collinsmark said:
But identical to the double slit experiment, photons from a given light source have the same interference pattern regardless of the photon rate. It doesn't matter if many photons pass through in quick succession or pass through slowly, one at a time. The pattern on the sensor is the same.
 
  • #32
Firstly, an apology: the FWHM is indeed not a function of star magnitude. I was mistaken; it is a measure of "shape".

Yes, the theoretical diffraction pattern of a point source convolved with the telescope's instrumental profile is infinite in extent, but this is misleading.

The image is also the result of sampling by the detector, and is bounded both by the detector's physical size and by the length of the exposure. At some point, as the intensity of the point source drops, no photons will be captured in the fainter parts of the point-source image, even with a noiseless detector. The image will either be finite and within the area of the detector, and measurable using standard methods, or be undefined if it extends to the detector edge.

This is true of both the core of the star image and the diffraction spikes. You can measure from the centroid to a point where the intensity of the object of interest falls to zero, or often to some defined fraction above the noise floor. This is routinely done in setting photometric apertures.

Regards Andrew
 
  • #33
Whatever the captured bit depth, it would be normal to do the calibration in 32-bit arithmetic and save the result in 32-bit files.
Regards Andrew
 
  • #35
andrew s 1905 said:
Yes, the theoretical diffraction pattern of a point source convolved with the telescope's instrumental profile is infinite in extent, but this is misleading.
It's very misleading because, in a real image, there is not a point source, and also there are a number of other sources in the vicinity of the low-level parts of a diffraction spike. This constitutes a 'floor' which can be significantly above the least significant step of the ADC.
 
  • #36
sophiecentaur said:
It's very misleading because, in a real image, there is not a point source, and also there are a number of other sources in the vicinity of the low-level parts of a diffraction spike. This constitutes a 'floor' which can be significantly above the least significant step of the ADC.

If a pixel lies along the path of a diffraction spike, and is dim enough that the expected value for that pixel is less than 1 analog-to-digital unit (ADU) after compensating for noise (dark-frame subtraction), there is still a finite probability that the pixel will register 1 or more ADU. For example, for a given exposure time, in a dim section of a diffraction spike, a pixel might only have a 50% chance of registering a single ADU or more. On an even dimmer section of the spike, the probability drops to 25%. This is due to the probabilistic nature of shot noise, which is inherently part and parcel of the signal. In other words, if you look along a diffraction spike in the vicinity of 50% probability, half of the pixels in that region will register at least 1 ADU above the noise (i.e., ~1 ADU after dark-frame subtraction).

The point is that even if the expected value of a pixel that lies in a diffraction spike is less than the least significant step of the ADC (i.e., 1 ADU), that does not guarantee that the pixel will not register a signal. The signal still affects the pixel registration in a probabilistic manner.

[Edit: and if a particular pixel lies along the intersection of diffraction spikes/artifacts, say from two or three or more different stars, the probabilistic contributions add together linearly, even if the expectation value of any one of the spikes/artifacts, or all of them, is less than 1 ADU in that region.]
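For what it's worth, those percentages are just Poisson statistics. Treating the pixel's expected count as \lambda (roughly speaking, in ADU-equivalent units; gain and read noise ignored for simplicity), the probability of registering at least one count is

P(\geq 1) = 1 - e^{-\lambda},

so \lambda = \ln 2 \approx 0.69 gives the 50% case, and \lambda \approx 0.29 gives roughly 25%.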

And you don't need a point source for this. As I've essentially stated in post #404, the interference pattern applies to all photons that pass through the optics and reach the detector, whether those photons originate from stars, nebulosity, accretion disks, anything. Any photon that passes through the telescope's optics is subject to an interference pattern before it reaches the sensor, if it reaches the sensor at all. It matters not what the source of the photons is.

There's nothing misleading about this. Diffraction patterns and interference patterns and the probabilistic nature of quantum mechanics are not "misleading." It's just how the universe works.
 
  • #37
I decided to put a piece of window screen in front of my objective lens for a diffraction test…

[attached images: the window screen mounted over the objective lens]

I took a single 5-minute exposure of Polaris at 600 mm focal length, f/9, ISO 100, with the window screen in front of the lens…

[attached image: the 5-minute exposure of Polaris]


When I adjust the RAW conversion settings of half the 14-bit image (with the identical exposure / image data), the brighter Polaris diffraction pattern appears the same shape and size as the dimmer star in the upper right…
[attached image: the two halves processed differently]
 
  • #38
collinsmark said:
There's nothing misleading about this. Diffraction patterns and interference patterns and the probabilistic nature of quantum mechanics are not "misleading." It's just how the universe works.
Yes, true, but there comes a point, even with photon statistics etc., where the probability of detecting a photon is FAPP (for all practical purposes) zero within the length of the exposure. No amount of signal processing can pull the signal from the noise.

If this were not the case there would be no lower limit to the faintness of stars we could detect and no need for larger telescopes.

Regards Andrew
 
  • #39
Devin-M said:
When I adjust the RAW conversion settings of half the 14-bit image (with the identical exposure / image data), the brighter Polaris diffraction pattern appears the same shape and size as the dimmer star in the upper right…
Not sure what you are trying to show, other than that by signal processing you can manipulate how an image looks.

Can you tell me how you measured the sizes of the images to be the same, as your earlier claim was that they are not finite but extend off the image?

Regards Andrew
 
  • #40
sophiecentaur said:
I'm afraid there are too many points in your post to be addressed.

OK, I already accepted that the sensor itself is linear over a wide range and that stacking will allow a substantial increase in effective bit depth by averaging out random noise. I also know that the maths of diffraction goes on and on, as far down as you like. Real life is not like that. We always run out of range because noise and interference are present.

Do you have a reference for 32-bit ADCs on the JWST? Wherever I have found the bit depth of the sensor arrays mentioned, it's been 16 bits. Stacking can be achieved in many ways, and they are all based on non-linear processing to reject spurious data, so I have to assume that, in fact, the linearity / purity of the images is restricted to 16 bits. JWST doesn't need defending, and 16-bit data has been quite adequate as a source of fantastic and revealing images.

I'm pretty sure the bit depth of the sensor itself is 16 bit, since the full-well value of any of the sensors in NIRCam's sensor array is less than 2^{16} = 65536. (See https://jwst-docs.stsci.edu/jwst-ne...detector-overview/nircam-detector-performance)

But each pixel value is stored in 32-bit floating-point format before or during the steps where calibration is applied and subframes are stacked. This allows for sub-ADU resolution of each pixel. As a matter of fact, as described below, it's possible to achieve resolutions not just below that of an ADU, but even below that of a single photon, if sufficient stacking is performed.

One obvious reason for stacking is to identify cosmic rays. They're not difficult to identify, because a subframe pixel affected by a cosmic ray will be a statistical outlier compared to the corresponding pixels in the other subframes.

And, as you mentioned, you can increase the signal to noise ratio above that of any single subframe by stacking multiple subframes. One can use the central limit theorem to show that (given a few assumptions about the noise being uncorrelated) the signal to noise ratio increases by a factor of \sqrt{N} over a single subframe, where N is the number of subframes stacked.

Stacking also increases the bit depth in another way, due to the probabilistic nature of photon arrival. Even if some subtle detail in a target is less than 1 ADU, it still affects the pixels in a probabilistic manner. For example, if a star's diffraction spike over a particular pixel is only 1/5 of an ADU, you would expect 2 out of 10 subframes to have an additional ADU above the background for that pixel. And if you stack 10 subframes (averaging them), you can get that extra 0.2 ADU of detail in the result. If you stack enough subframes, you can achieve resolutions finer than even a single photon.
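A minimal simulation of that 1/5-ADU example (numpy assumed; an idealized noiseless detector, with ADU treated as whole photon counts for simplicity):

import numpy as np

rng = np.random.default_rng(42)

true_level = 0.2      # expected counts in this pixel per subframe (1/5 ADU)
n_subframes = 10_000  # how many subframes we stack

# Each subframe registers whole counts only: mostly 0, occasionally 1+.
subframes = rng.poisson(true_level, size=n_subframes)

# No single frame can show 0.2 ADU, but the stacked average recovers it.
print(subframes.mean())  # ~0.2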

I don't know how much stacking is typically done in JWST images, but there's definitely some stacking done. There are gaps between the sensors within the sensor array, so there needs to be at least some stacking overlap to cover those gaps. (See https://jwst-docs.stsci.edu/jwst-near-infrared-camera)

[attached image: NIRCam modules field of view]


For what it's worth, here's an image from the recent Pillars of Creation redo, specifically showing the stacking overlap of the individual sensor cores. I downloaded this particular image from MAST, then used PixInsight to apply a quick-and-dirty stretch to it (otherwise it would look nearly all black), resized it for PF, and saved it as a JPEG. This image was acquired using the F090W filter.
[attached image: stretched F090W frame showing the sensor-overlap regions]
 
  • #41
andrew s 1905 said:
Yes, true, but there comes a point, even with photon statistics etc., where the probability of detecting a photon is FAPP (for all practical purposes) zero within the length of the exposure. No amount of signal processing can pull the signal from the noise.

Of course there are limitations with the sensor. I've never argued against that. What I object to are incorrect claims such as
  • diffraction spikes are caused by nonlinearities (false)
  • diffraction patterns themselves are inherently nonlinear (false)
  • dim stars never produce diffraction spikes; diffraction spikes are only caused by bright stars (false)
  • diffraction patterns are a nonlinear function of the star's brightness (false)
  • for a given exposure it's impossible to ever gain more detail below 1 ADU (false. You can gain better resolution than 1 ADU by stacking multiple subframes and exploiting the probabilistic nature of photon arrival).
I've never claimed that for a given exposure you can gain more detail than a single ADU by "signal processing." Of course not. But you can take that single exposure and stack it together with many other similar, single exposures, and get that detail back. Or, if saturation isn't an issue, just take longer exposures.

If what you said were true ("the probability of detecting a photon is FAPP zero within the length of the exposure"), then the act of stacking multiple sub-exposures of the same length would also have "FAPP zero" probability of detecting a photon. But it doesn't. You can get that detail* back by stacking. The point being that the information of that subtle diffraction spike of that dim star is still there, albeit in a probabilistic manner (i.e., it takes more than one exposure, but it can be gotten).

*[Edit: here "detail" refers to small variations in intensity, not detail in terms of angular resolution.]

Applying all that to this discussion: Diffraction patterns/interference patterns are not the result of the exposure time or the result of sensor limitations. Diffraction patterns/interference patterns are ultimately a function of the telescope's optics.

andrew s 1905 said:
If this were not the case there would be no lower limit to the faintness of stars we could detect and no need for larger telescopes.

Regards Andrew

There are several reasons for larger telescopes. Two in particular:
  1. For the same angular resolution (or for a given focal length), a bigger telescope gathers more light and allows the image to be acquired in less time, all else being roughly equal.
  2. And more importantly, the image produced by a given telescope is essentially a convolution of the scene with the diffraction pattern/interference pattern we are discussing here. It's not possible to achieve more angular detail in the image than the angular detail in the diffraction pattern/interference pattern. Bigger telescopes have smaller/more detailed diffraction/interference patterns. So if you want more angular detail in the resulting image, you need a bigger scope.
 
  • #42
collinsmark said:
I'm pretty sure the bit depth of the sensor itself is 16 bit,
So we agree on that. The only way to increase the effective number of bits is by using multiple images, and you say there's no stacking. So the linearity range cannot go below one bit, 1 : 1/(2^{16}). That's not a big relative magnitude, and can account for the 'smaller' / shorter spikes for the dimmer stars. I'm not sure why you took exception to this.
 
  • #43
collinsmark said:
  • diffraction spikes are caused by nonlinearities (false)
  • diffraction patterns themselves are inherently nonlinear (false)
  • dim stars never produce diffraction spikes; diffraction spikes are only caused by bright stars (false)
  • diffraction patterns are a nonlinear function of the star's brightness (false)
  • for a given exposure it's impossible to ever gain more detail below 1 ADU (false. You can gain better resolution than 1 ADU by stacking multiple subframes and exploiting the probabilistic nature of photon arrival).
Where did you get this list from? You must have mis-read a lot of what I wrote or, at least, have confused the concept of "recorded image of a pattern" with the pattern itself. The linearity failure at low levels can destroy recorded spikes, and 16 bits is where linearity fails. Some spikes are never there in 16-bit images.
 
  • #44
sophiecentaur said:
So we agree on that. The only way to increase the effective number of bits is by using multiple images, and you say there's no stacking. So the linearity range cannot go below one bit, 1 : 1/(2^{16}). That's not a big relative magnitude, and can account for the 'smaller' / shorter spikes for the dimmer stars. I'm not sure why you took exception to this.
No, there is stacking. There's always at least some stacking, even with JWST's pristine sensors. I'm just not sure how much is typically done with JWST.

I don't take exception to the acknowledgment that there are practical limitations. Of course there are limitations such as finite amount of integration time for practical reasons. Of course.

What I take objection to are claims implying that it is impossible to detect small details, such as the diffraction spikes of dimmer stars, even in principle. It's not impossible; the physics that causes the diffraction spikes of brighter stars is equally present for dimmer stars. It just may take more integration time (either longer exposures or stacking of shorter exposures) to bring those spikes above the floor.
 
  • #45
sophiecentaur said:
Where did you get this list from? You must have mis-read a lot of what I wrote or, [...]
I wasn't replying to you in particular on that one. :smile:
 
  • #46
collinsmark said:
I wasn't replying to you in particular on that one. :smile:
Well I never made any of those claims so who were you replying to?
Regards Andrew
 
  • #47
collinsmark said:
If what you said were true ("the probability of detecting a photon is FAPP zero within the length of the exposure"), then the act of stacking multiple sub-exposures of the same length would also have "FAPP zero" probability of detecting a photon. But it doesn't. You can get that detail back by stacking. The point being that the information of that subtle diffraction spike of that dim star is still there, albeit in a probabilistic manner.
This is not true. Whatever the exposure time (single or multiple images), there will be an intensity where, FAPP, zero photons will be detected. Yes, by increasing the exposure you can record fainter details, but even here there is a limit, due to non-zero sky brightness and other noise sources.

Regards Andrew
 
  • #48
andrew s 1905 said:
This is not true. Whatever the exposure time (single or multiple images), there will be an intensity where, FAPP, zero photons will be detected. Yes, by increasing the exposure you can record fainter details, but even here there is a limit, due to non-zero sky brightness and other noise sources.

Regards Andrew
If there are no photons at all, then of course there will be no photons detected.

But if there's even a dim source, the Central Limit Theorem disagrees with you.

It's just like rolling a die (as in "dice") that has an ever-so-slightly greater chance of landing on one particular number than on any other: the discrepancy can be determined with enough rolls. Even if the imperfection is smaller, it can be determined with a greater number of rolls.
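That die analogy is easy to check numerically (numpy assumed; the 2% bias below is arbitrary):

import numpy as np

rng = np.random.default_rng(1)

# A die whose "6" is 2% more likely than each of the other faces.
p = np.array([1.0, 1.0, 1.0, 1.0, 1.0, 1.02])
p /= p.sum()

# The estimate of the "6" frequency tightens as 1/sqrt(N); with enough
# rolls, the tiny bias stands clearly above the statistical scatter.
for n_rolls in (1_000, 100_000, 10_000_000):
    rolls = rng.choice(6, size=n_rolls, p=p)
    print(n_rolls, (rolls == 5).mean())  # a fair die would give ~0.1667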

In the case of a dim object viewed through a telescope, if the source's photon flux is greater than its surrounding background, and the photons' wavelengths are within the bandwidth of the receiver (within the filter's/sensor's bandwidth), and if the statistics of the system are stationary (i.e., we're not talking about a dynamical system such as a one-off flash, or something changing its behavior in an aperiodic fashion), then the photons can be detected with sufficient integration.

For a given exposure time of subframes, the pixel value of interest can be treated as a random variable with a mean (i.e., "average" value) and a standard deviation. The standard deviation of the pixel value is the result of all the noise sources combined.

We can estimate the true mean by summing together the pixel values of multiple subframes and then dividing by N, the number of subframes in the ensemble (in other words, taking the average value of the pixel).

What does that do to the standard deviation, you might ask? That is, the standard deviation caused by the combination of all noise sources after summing multiple subframes together?

The Central Limit Theorem shows that the standard deviation of the averaged ensemble tends toward zero as N increases, specifically by a factor of \frac{1}{\sqrt{N}}.

Similarly, if instead of stacking you wanted to take a longer exposure (and are not at risk of saturation), with exposure time T, the time-averaged noise (per unit signal) decreases by a factor of \frac{1}{\sqrt{T}} for all noise sources except the read noise, and then \frac{\mathrm{read \ noise}}{T} is added on as a final step. [Edit: I'm admittedly being kind of sloppy here. The units of time here are not seconds, but rather fractions of some fixed time interval, such as that used for the individual subframes described above.]

The implication here is that the estimated mean approaches the true mean with arbitrarily close precision as the total integration time increases.

Of course there may be practical limitations in any real world system. Of course. But saying that it's not possible, even in principle, is incorrect.
 
  • #49
The pertinent equations suggest a different color point source will have a different diffraction shape/size, not a different brightness.
 
  • #50
Devin-M said:
The pertinent equations suggest a different color point source will have a different diffraction shape/size, not a different brightness.
Yes, the diffraction pattern is wavelength dependent. That's true.
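Right. For a circular aperture of diameter D, the angular radius of the first Airy minimum is

\theta \approx 1.22 \, \frac{\lambda}{D},

so the pattern scales with wavelength \lambda (and inversely with aperture D), while the source's brightness doesn't enter into it at all.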
 
