Diffraction Effects and Artifacts in Telescopes like the JWST

  • #1
collinsmark
[Mentor Note -- Discussion spun off from the main JWST thread]

pinball1970 said:
What is this top right? It looks very symmetrical in shape and colour, just an optical effect from the telescope like diffraction spikes?

Yes, the apparent hexagonal shape of that star is due to diffraction within the telescope optics (the characteristics of the optical filters may also have played a role).

For clarity, we're talking about this star:
1666253518991-png.png


sophiecentaur said:
But why aren’t all (bright) star images like it?

They are (images from the same telescope optics, that is). You can even see it in the main subject star in the original image:
1666253408455-png.png


Ignore the dust rings for a moment, and concentrate on the central star. Its apparent shape is a hexagon, just like the other star (same size even). It's just that it's so much brighter that all the color detail is saturated (blown highlights), so it just looks like a white hexagon.

You might not see the hexagon shape in the other stars because they are relatively dimmer. With the dimmer stars, the outer 6 hexagonal components more easily blend into the background. But they're technically still there. You might be able to spot more if you look closely.

What gets saturated and what doesn't depends not only on the telescope's optics hardware (which may include the optical filters used) and the exposure characteristics, but also on the image-processing details, such as how the histogram was stretched to produce the final image.
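To illustrate that last point, here is a minimal sketch (synthetic data, with a Gaussian standing in for JWST's actual hexagonal PSF) showing how a nonlinear histogram stretch decides whether a dim star's structure survives the conversion to display levels:

```python
import numpy as np

# Synthetic "linear" frame: two stars with the same PSF shape, different
# brightness. (Illustrative only -- a Gaussian stands in for JWST's PSF.)
y, x = np.mgrid[0:64, 0:64]

def star(cx, cy, flux, sigma=2.0):
    return flux * np.exp(-((x - cx)**2 + (y - cy)**2) / (2 * sigma**2))

frame = star(16, 32, flux=10000.0) + star(48, 32, flux=100.0)

def to_8bit(img):
    # Map linear data onto the 256 levels a display can show.
    return np.clip(255 * img / img.max(), 0, 255).astype(np.uint8)

linear_display = to_8bit(frame)                 # naive linear scaling
stretched_display = to_8bit(np.arcsinh(frame))  # a typical nonlinear stretch

# Count pixels bright enough to "show up" (>= 1 display level) around the
# dim star -- far more of its profile survives after the stretch.
dim_region = (slice(24, 40), slice(40, 56))
print((linear_display[dim_region] > 0).sum())
print((stretched_display[dim_region] > 0).sum())
```

The dim star's outer profile is "technically still there" in the linear data either way; the stretch only changes how much of it lands above the lowest display level.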
 
  • #2
collinsmark said:
For clarity, we're talking about this star:
That's a help. I was talking about the 'dust rings'.
collinsmark said:
Ignore the dust rings for a moment,
But aren't they the most remarkable feature?
collinsmark said:
You might not see the hexagon shape in the other stars because they are relatively dimmer.
Problem is, there are thousands of JWST star images with a range of exposure values.

There are a number of odd shaped objects out there - take this square nebula, for example. That's not considered to be an optical artefact. Diffraction effects are very repeatable; all that's needed is for the brightness of the object to be appropriate - just as the diffraction spikes in images from Newtonian scopes only seem to be present on the brighter stars. I'm not saying that the image cannot be due to diffraction - just that, if it were, there would be many many other examples of the same thing.
 
  • #3
sophiecentaur said:
But aren't [the dust rings] the most remarkable feature?

Yes, of course. The dust rings in the image are from real, physical dust rings in space.

The hexagonal appearance of the (bright) stars is a diffraction artifact caused by JWST's internal optics.

sophiecentaur said:
Problem is, there are thousands of JWST star images with a range of exposure values.

There are a number of odd shaped objects out there - take this square nebula, for example. That's not considered to be an optical artefact. Diffraction effects are very repeatable; all that's needed is for the brightness of the object to be appropriate - just as the diffraction spikes in images from Newtonian scopes only seem to be present on the brighter stars. I'm not saying that the image cannot be due to diffraction - just that, if it were, there would be many many other examples of the same thing.

Yes, I believe we are in agreement on this. There was a question about diffraction artifacts, and an example of a star in the image showing those artifacts. Those hexagonal artifacts (as well as any diffraction spikes) have a different cause than the actual dust rings. That's all I meant to say.

--------------
Edit: On a slightly different note, I should point out that diffraction artifacts and diffraction spikes are actually present on all the stars and all the objects on every JWST image; it's just that they are usually too dim to show up in the final processed image on all but the brightest stars.

But if you were to take the original raw data and stretch it to its limits, you would see these artifacts make up everything in the image (well, at least until the dimmer artifacts blur into the noise floor of the sensor).

It's almost like painting a picture with a brush that is hexagonal in shape, has a much brighter spot at its very center, and has dim diffraction spikes out on the sides. If you want to paint a dim star, you barely touch the brush against the black canvas, and only the center dot shows up much. But for really bright stars, the center dot as well as the hexagonal shape get blown out white, and even the spikes might begin to show through. And everything in between, such as the nebulosity, is painted with the same brush.
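The "brush" analogy is, mathematically, a convolution of the scene with the telescope's point-spread function (PSF). A toy sketch, with a made-up cross-shaped kernel standing in for JWST's real PSF:

```python
import numpy as np

# The "brush": one fixed PSF kernel with a bright core and faint "spikes".
# (Toy stand-in for JWST's real PSF; the shape and values are made up.)
k = np.zeros((21, 21))
k[10, :] = 0.01   # faint horizontal spike
k[:, 10] = 0.01   # faint vertical spike
k[10, 10] = 1.0   # bright central dot

# Scene: two point-source stars, one 1000x brighter than the other.
# "Painting" each star = stamping the same kernel, scaled by the star's flux.
img = np.zeros((64, 64))
for (cy, cx, flux) in [(20, 20, 1000.0), (44, 44, 1.0)]:
    img[cy - 10:cy + 11, cx - 10:cx + 11] += flux * k

# Both stars carry the *same* pattern, scaled by flux: corresponding
# PSF pixels differ by exactly the flux ratio.
print(img[20, 20] / img[44, 44])  # core vs core: ~1000
print(img[20, 25] / img[44, 49])  # spike vs spike: ~1000
```

Every star is stamped with an identically shaped pattern; only the overall scale factor (the flux) differs.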
 
  • #4
collinsmark said:
Those hexagonal artifacts (as well as any diffraction spikes) have a different cause than the actual dust rings.
A different cause, yes, but that particular image has concentric red and green haloes. Do we see those on every star image (in addition to the six spikes from the three struts)? Anything that the telescope optics introduces would be common to many images, but are those hexagonal halos common on the fainter star images? This is my problem with your suggested explanation: I'd say a third effect has to be present when the sensor is operating way down its output range. (Ref. that 'square nebula' image, which is not explained by telescope diffraction.)
1666351990333.png


The bright spikes are only seen for bright stars that are saturating the image sensor at their centre; the spikes are very much fainter than that - levels of the same order as fainter stars whose spikes we do not see because they are too faint.
 
  • #5
sophiecentaur said:
A different cause, yes, but that particular image has concentric red and green haloes. Do we see those on every star image (in addition to the six spikes from the three struts)? Anything that the telescope optics introduces would be common to many images, but are those hexagonal halos common on the fainter star images? This is my problem with your suggested explanation: I'd say a third effect has to be present when the sensor is operating way down its output range. (Ref. that 'square nebula' image, which is not explained by telescope diffraction.) View attachment 315837

The bright spikes are only seen for bright stars that are saturating the image sensor at their centre; the spikes are very much fainter than that - levels of the same order as fainter stars whose spikes we do not see because they are too faint.

Yes, the artifacts are caused by the characteristics of the telescope's optics. The wavelength of light is also a factor, so the diffraction artifacts will manifest differently for different colors.

And the way the data is processed also impacts how easily the diffraction artifacts are noticeable. So different JWST images might show different artifacts, even though they share the same optics, just because they were processed differently.

Here's another image based (I believe) on the same data, just processed differently, with a different stretch curve (and rotated and cropped differently).
p9krHxVwLDkbbJU9bb7XgK.jpg
 
  • #6
Let me elaborate on one more thing, just for clarity.

Refer to the same image in my last post (shown again here):
p9krhxvwldkbbju9bb7xgk-jpg.jpg


For the moment, ignore the dust rings (shown in pink/orange in the image). Those are real, huge rings out in space, gigantic in size. I'm not talking about those in this post. I'm just discussing the diffraction artifacts for the rest of this post.

In the image, notice the obvious diffraction spikes of the central star. Then there are a couple of other stars -- one to the left, and another to the lower right -- that show some diffraction artifacts, and whose diffraction spikes are just barely visible. The rest of the stars don't seem to have any noticeable artifacts or spikes.

Based on this, one might be inclined to think that brighter stars produce bigger artifacts and bigger spikes in the telescope's optics. This is false. That's not how it works.

In truth, brighter stars produce brighter artifacts and brighter spikes in the telescope's optics. Brighter, yes, but not "bigger."

Yes, those two stars -- one to the left and the other to the lower right -- have the same sized artifacts and spikes as the central star. They're just so dim that you can't make them out, except for the brightest part of the artifacts.

The central star is bright enough such that the hexagonal artifact is so bright (due to the way the image was processed) it blows out the whole central region, obfuscating any detail. Again, don't try looking for a bigger hexagonal pattern from the central star. It's not bigger. The artifact is the same size as the others. It's just brighter.

And the rest of the stars? They have the artifacts and spikes too, but the stars are dim enough such that the artifacts and spikes are so dim they are not noticeable in the image.
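One way to put numbers on "brighter, not bigger": if a star's apparent size is the radius out to which its profile stays above a fixed display threshold, that radius grows with flux even though the profile's shape never changes. A sketch, using a Gaussian as a stand-in profile:

```python
import numpy as np

# Same PSF shape for every star; only the scale (flux) differs.
# Apparent "size" = radius where the profile drops below a display threshold.
def visible_radius(flux, threshold=1.0, sigma=2.0):
    # Solve flux * exp(-r^2 / (2 sigma^2)) = threshold for r.
    if flux <= threshold:
        return 0.0  # star never rises above the threshold at all
    return sigma * np.sqrt(2 * np.log(flux / threshold))

for flux in (10, 1000, 100000):
    print(flux, round(visible_radius(flux), 2))

# The profile itself never changes; brighter stars simply stay above the
# threshold out to larger radii, which is why they *look* bigger.
```

This is exactly the effect described above: the dim stars' artifacts are present but fall below the visibility threshold almost immediately.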
 
  • #7
collinsmark said:
In the image, notice the obvious diffraction spikes of the central star. Then there are a couple of other stars -- one to the left, and another to the lower right -- that show some diffraction artifacts, and whose diffraction spikes are just barely visible. The rest of the stars don't seem to have any noticeable artifacts or spikes.

Based on this, one might be inclined to think that brighter stars produce bigger artifacts and bigger spikes in the telescope's optics. This is false. That's not how it works.

In truth, brighter stars produce brighter artifacts and brighter spikes in the telescope's optics. Brighter, yes, but not "bigger."
I think this is using "bigger" in a very nonstandard way. The images of aberration-free point sources will all have the same shape, but their apparent, observable size will depend on the intensity of the source. If you measure the size of the core of a point source's image using the standard FWHM (Full Width at Half Maximum), you will get different results for stars of different magnitudes. Similarly, if you measure the size of the diffraction spikes in the image using any normal technique, the sizes will differ.

In your sense, all images of objects are infinite in size, as the diffraction artifacts are theoretically without bound!
Regards Andrew
 
  • #8
andrew s 1905 said:
The images of aberration-free point sources will all have the same shape, but their apparent, observable size will depend on the intensity of the source. If you measure the size of the core of a point source's image using the standard FWHM (Full Width at Half Maximum), you will get different results for stars of different magnitudes. Similarly, if you measure the size of the diffraction spikes in the image using any normal technique, the sizes will differ.
It depends how you processed the image.

The two images below are the same image both processed differently by myself from raw JWST sensor data:
a_00001_0.jpg

a_00001_0-2.jpg


You can see that in the second image, the diffraction spikes of the 2nd brightest star continue all the way to the top and bottom of the image frame.

Using the photoshop color sampler, I inspected the first image and the red circled area had an RGB value of 0,0,0.
sample.jpg


a_00001_0-2 copy.jpg
 
  • #9
Devin-M said:
You can see that in the second image, the diffraction spikes of the 2nd brightest star continue all the way to the top and bottom of the image frame.
Indeed they do. They have the same shape, but even to a casual observer they have different widths; are you really saying they are of the same dimensions? Cut and paste one onto the other: are they identical?

Also, as I pointed out, the FWHM of the stars is different. It all depends on whether the spikes are above the noise floor or not. Spikes from many stars in your stretched image don't reach the bottom.

Regards Andrew
 
  • #10
andrew s 1905 said:
Indeed they do. They have the same shape, but even to a casual observer they have different widths; are you really saying they are of the same dimensions? Cut and paste one onto the other: are they identical?

Also, as I pointed out, the FWHM of the stars is different. It all depends on whether the spikes are above the noise floor or not. Spikes from many stars in your stretched image don't reach the bottom.

Regards Andrew
The apparently brighter star has smaller diffraction spikes than the apparently dimmer one when I process the stars differently...

dimmer.jpg

brighter.jpg
 
  • #11
The star on the right is apparently brighter, but has shorter diffraction spikes than the apparently dimmer star on the left, because each half of the image was processed differently.
comparison.jpg

a_00001_0.jpg
 
  • #12
andrew s 1905 said:
The image of an aberration free image of point sources will all have the same shape but its apparent observable size will depend on the intensity of the source.
Yes, in a linear world.
This is down to the sensor's inherent non-linearity and other non-linearities from image processing. No surprises. Astrophotographers are concerned with displaying whatever features they need in order to present their results.
 
  • #13
Devin-M said:
The star on the right is apparently brighter, but has shorter diffraction spikes than the apparently dimmer
It's not about how you process the image. If you restrict yourself to linear processing and don't saturate the image, then a properly formulated measure of size will give the same result; FWHM is an example. When discussing how big something is, you have to specify how you will measure it.
Regards Andrew
 
  • #14
andrew s 1905 said:
It's not about how you process the image. If you restrict yourself to linear processing and don't saturate the image, then a properly formulated measure of size will give the same result; FWHM is an example. When discussing how big something is, you have to specify how you will measure it.
Regards Andrew
Not 'wrong', but the fact is that most images you see are subject to non-linearity and a limited contrast ratio. As with lithographic imaging, the brightness of an image translates into size. In printing, this is what allows half-tone images to be printed using a 'screen' and regular (on/off) ink.

To get equal-sized spikey star images, you need enough quantising levels to cope with the brightest bits of the brightest star and the dimmest bits of the spikes of the dimmest star. Hence we see just a few spikey stars and a lot of 'round' ones on most space images. It's a real world.
 
  • #15
sophiecentaur said:
Not 'wrong', but the fact is that most images you see are subject to non-linearity and a limited contrast ratio. As with lithographic imaging, the brightness of an image translates into size. In printing, this is what allows half-tone images to be printed using a 'screen' and regular (on/off) ink.

To get equal-sized spikey star images, you need enough quantising levels to cope with the brightest bits of the brightest star and the dimmest bits of the spikes of the dimmest star. Hence we see just a few spikey stars and a lot of 'round' ones on most space images. It's a real world.
I don't disagree, but I was responding to a post where it was claimed that the diffraction patterns, specifically the spikes, all have the same "bigness". With image processing you can achieve almost anything. I was focused on scientific measurements.
Regards Andrew
 
  • #16
A measurement of FWHM doesn't give you the length (in pixels or arc seconds) of the entire diffraction spike.
 
  • #17
collinsmark said:
Based on this, one might be inclined to think that brighter stars produce bigger artifacts and bigger spikes in the telescope's optics. This is false. That's not how it works.

In truth, brighter stars produce brighter artifacts and brighter spikes in the telescope's optics. Brighter, yes, but not "bigger."
While in theory all stars or other "point sources" of a given wavelength and seen through a given aperture have the same Airy disk radius characterized by the above equation (and the same diffraction pattern size), differing only in intensity, the appearance is that fainter sources appear as smaller disks, and brighter sources appear as larger disks.[5]

Because any detector (eye, film, digital) used to observe the diffraction pattern can have an intensity threshold for detection, the full diffraction pattern may not be apparent.
https://en.m.wikipedia.org/wiki/Airy_disk
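For scale, the Airy first-minimum radius the quoted passage refers to can be worked out directly. A sketch assuming an example 2 µm near-infrared wavelength (an assumption for illustration; JWST's aperture is segmented and hexagonal, so 1.22 λ/D is only a rough guide to its true PSF):

```python
import math

# First-minimum (Airy) radius theta ~= 1.22 * lambda / D for an ideal
# circular aperture. Note this depends only on wavelength and aperture,
# not on the star's brightness.
D = 6.5        # JWST primary mirror diameter, metres
lam = 2.0e-6   # example near-infrared wavelength, metres (assumed)

theta_rad = 1.22 * lam / D
theta_arcsec = theta_rad * (180 / math.pi) * 3600
print(round(theta_arcsec, 3))  # ~0.077 arcsec
```

The brightness of the source appears nowhere in the formula, which is the point being made: the pattern's scale is fixed by the optics, and intensity only changes how much of it clears the detection threshold.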
 
  • #18
Another way to demonstrate that the diffraction spikes are always the same shape… I take my Nikon D800 DSLR and insert a clip-in narrowband OIII filter in front of the sensor (which only allows one particular wavelength through to the sensor). Next I place a Bahtinov focusing mask in front of the objective lens of my telescope, which creates diffraction spikes.

5F3469C5-9CE0-4E27-A3EB-0EB616CEC19C.jpeg

8C6D1092-2572-4490-B30E-B1AC2AF22337.jpeg

418B7AD4-93E8-4A1D-A786-8AE000B91975.jpeg
 
  • #19
Devin-M said:
Of course the spacing will be the same, but a fainter image will not produce such long spikes, because lower-amplitude dots are not recordable.

Are we all talking at cross purposes here, perhaps?
 
  • #20
andrew s 1905 said:
If you measure the size of the core of point sources image using the standard FWHM (Full Width at Half Maximum) you will get different results for different magnitude stars. Similarly if you measure the size of the diffraction spikes in the image using any normal technique the sizes will differ.

In your sense all images of objects are infinite in size as the diffraction artifacts are theoretically without bound !
Regards Andrew
This is not true for linear data.

As a matter of fact, that's the very reason why full width at half maximum (FWHM) is such a useful parameter: as long as the star is exposed below the point of saturation, and as long as it is exposed enough that the noise floor is negligible -- in other words, as long as the star's exposure is within the linear region -- the star's FWHM is independent of the star's brightness (whether that brightness is due to its inherent magnitude or to the exposure settings). The star's FWHM doesn't change with brightness as long as the star's image is still linear.

That's why FWHM is such a good measure for telescope autofocus algorithms.
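A quick numerical check of that claim: measuring the FWHM of the same profile at different amplitudes, the result doesn't move as long as the data stay linear. (A 1-D Gaussian profile is assumed here purely for illustration.)

```python
import numpy as np

# Measure FWHM of a sampled 1-D star profile: the width of the region
# where the profile is at or above half its peak value.
def fwhm(profile, x):
    half = profile.max() / 2
    above = x[profile >= half]
    return above.max() - above.min()

x = np.linspace(-10, 10, 2001)   # sample grid, 0.01 spacing
sigma = 1.5

for amplitude in (1.0, 100.0, 10000.0):
    profile = amplitude * np.exp(-x**2 / (2 * sigma**2))
    print(amplitude, round(fwhm(profile, x), 2))

# Every amplitude gives the same FWHM (analytically 2.355 * sigma for a
# Gaussian), because scaling the profile also scales the half-maximum.
```

Saturation would break this: a clipped peak lowers the measured maximum, which is why the claim is restricted to the linear region.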

In your sense all images of objects are infinite in size as the diffraction artifacts are theoretically without bound !

Yes, in principle that is what I am saying -- ideally, when the data are linear and we can ignore saturation and the noise floor, diffraction artifacts are in that sense without bound.
 
  • #21
When you factor in photon shot noise, an individual photon from a dim star has the same probability of ending up at a given "diffraction point" as an individual photon from a bright star. For example, due to shot noise and diffraction, a single photon from a dim star could end up striking the sensor farther from the star's apparent position than a single photon from a bright star.

https://en.m.wikipedia.org/wiki/Shot_noise
Photon-noise.jpg

Photon noise simulation. Number of photons per pixel increases from left to right and from upper row to bottom row.

418b7ad4-93e8-4a1d-a786-8ae000b91975-jpeg.jpg
 
  • #22
In relevant ways, this is going to be a repeat of the points I made in my last post:

andrew s 1905 said:
It's not about how you process the image. If you restrict yourself to linear processing and don't saturate the image then a properly formulated measure of size will give the same result e,g. FWHM is an example. When discussing how big something is you have to specify how you will measure it.
Regards Andrew

Full Width Half Maximum (FWHM) is a prime example of what I'm talking about. In the linear domain, the FWHM of a star's image does not change with the star's brightness. The brightness might change, but the FWHM does not.

The same is true with diffraction artifacts. Brighter stars will produce brighter diffraction artifacts. But if you scale the brightness of the diffraction artifacts linearly with the brightness of the imaged star, nothing changes.

This of course assumes the [relevant objects in the] image is [are] in the linear region, such that saturation and the noise floor can be effectively ignored.
 
  • #23
collinsmark said:
This is not true for linear data.
Where would you expect to find "linear data"? As you point out, the half-power radius is a useful tool because it allows guiding with dim stars, but that's not the same as claiming that spikes (which theoretically go on for ever) will all be visible for all stars. Your data near the peak can be considered linear, but it eventually ends up being non-linear.

The bit depth of the cameras appears to be 16, which corresponds to a maximum relative magnitude of about 12, at which point the artefacts will disappear. That implies a star that is exposed at the limit will only display artefacts down to a magnitude of 12, relative to the correctly exposed star.
'Seeing' is excellent at a Lagrange point but scatter (artefacts, if you like) from nearby bright stars can limit how far down you can go and I suspect that that figure of 12 for relative magnitude could be optimistic. Signal to noise ratio can be eaten into by 'interference' / cross talk.

Signal processing can be a big help but that will take us away from a linear situation and could suppress nearby faint stars further.
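The 16-bit → roughly 12-magnitude figure above follows from the magnitude scale: an N-bit linear range spans Δm = 2.5·log10(2^N). A one-line check:

```python
import math

# Dynamic range of an N-bit linear sensor expressed in stellar magnitudes:
# delta_m = 2.5 * log10(2**N). For 16 bits this gives ~12 mag, the figure
# quoted above.
for bits in (8, 16, 32):
    delta_m = 2.5 * math.log10(2**bits)
    print(bits, round(delta_m, 1))
```

So a star exposed right at full scale can, at best, show structure down to about 12 magnitudes fainter before it drops below one count, noise aside.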
 
  • #24
sophiecentaur said:
Where would you expect to find "linear data"? As you point out, the half-power radius is a useful tool because it allows guiding with dim stars, but that's not the same as claiming that spikes (which theoretically go on for ever) will all be visible for all stars. Your data near the peak can be considered linear, but it eventually ends up being non-linear.

The bit depth of the cameras appears to be 16, which corresponds to a maximum relative magnitude of about 12, at which point the artefacts will disappear. That implies a star that is exposed at the limit will only display artefacts down to a magnitude of 12, relative to the correctly exposed star.
'Seeing' is excellent at a Lagrange point but scatter (artefacts, if you like) from nearby bright stars can limit how far down you can go and I suspect that that figure of 12 for relative magnitude could be optimistic. Signal to noise ratio can be eaten into by 'interference' / cross talk.

Signal processing can be a big help but that will take us away from a linear situation and could suppress nearby faint stars further.
I believe the JWST data are 32-bit monochrome. At least that's what the FITS files are. When you first open a file, the only part that's non-linear is the "black spots" at the centers of the brightest stars: counterintuitively, whenever a pixel becomes saturated it turns black.

When you first open any of the files, all or almost all of them appear totally black on the computer monitor. In order to see any of the stars, galaxies, and nebulae, you have to "histogram stretch" the data. That's the point where the data become non-linear and processing comes into play. If you don't stretch the data, you won't see anything.

That's because JPG files and computer monitors only display 256 levels of brightness per channel (8 bits per color channel), while 32-bit files record 4,294,967,296 (2^32) levels of brightness. So when you stretch the data, you're essentially choosing which of those levels to "throw away", because you can only keep 256 (and then you're re-mapping the remaining 256 brightness levels to the ones your monitor can show). So basically the JWST records the data linearly, but if you keep it that way you can't actually "see" anything.
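The "throwing away levels" point can be sketched with a handful of toy pixel values: a straight linear mapping onto 256 display levels zeroes out the faint end, while an asinh-style stretch keeps it. (The values below are made up; real calibrated JWST FITS data are 32-bit floats, as noted above.)

```python
import numpy as np

# Toy linear pixel values spanning five decades of brightness.
data = np.array([0.5, 5.0, 50.0, 500.0, 50000.0])

# Naive linear mapping onto 256 display levels.
linear_8bit = np.clip(255 * data / data.max(), 0, 255).astype(np.uint8)

# Nonlinear "histogram stretch" (asinh) before mapping to 256 levels.
stretched_8bit = np.clip(
    255 * np.arcsinh(data) / np.arcsinh(data.max()), 0, 255
).astype(np.uint8)

print(linear_8bit)     # the faintest pixels collapse to 0 -- invisible
print(stretched_8bit)  # faint pixels land on distinct nonzero levels
```

With the linear mapping, everything below about 1/255 of the brightest pixel is indistinguishable from black, which is why an unstretched JWST frame looks empty on a monitor.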
 
  • #25
collinsmark said:
The same is true with diffraction artifacts. Brighter stars will produce brighter diffraction artifacts. But if you scale the brightness of the diffraction artifacts linearly with the brightness of the imaged star, nothing changes.
Yes they have the same shape. On the image their dimensions differ. If you insist on scaling things of the same shape then they can be made to have the same size. Regards Andrew
 
  • #26
andrew s 1905 said:
On the image their dimensions differ.
How will you measure the difference in length when each star has diffraction effects all the way to the edge of the image frame? The diffraction of point sources occurs in the atmosphere & optical tube assembly, not in the star or the sensor.
 
  • #27
Devin-M said:
I believe the JWST is operating at 32 bit monochrome
That's what you get when you stack many different images. Stacking allows faint parts of an image to appear way down in the brightness distribution.
You make a valid point by bringing in the 32-bit issue and the limits of our displays. That could account for disappearing low-level detail; after all, the cameras are wide-field and we often only see a small portion of a full image.
 
  • #28
sophiecentaur said:
Where would you expect to find "linear data"?

All astrophotos start as linear. The analog-to-digital unit (ADU) value of a pixel is a linear function of the number of photons striking that pixel, up to the point of saturation at the high end and down to the quantization and noise at the low end. This means that when you plot ADU vs. photon count, there's a big, long, straight line in the middle, i.e., it's linear.

This linear relationship will be sacrificed in the processing, when the nonlinear "stretch" is implemented, not to mention other nonlinear processing such as curves and contrast adjustments. Pretty much every deep sky astrophoto of a JWST image published to the media has a nonlinear stretch applied (in addition to other nonlinear operations), otherwise it wouldn't look like much of anything.

The characteristics of this stretch and curves are chosen subjectively, usually decided by the artistic whims of the person processing the data. And it is the result of these artistic choices that cause dimmer stars to appear point like and brighter stars to have spikes. Had the person made different artistic choices when processing the data, it would change how many stars have artifacts and how many do not.

But it all starts as linear (before the artistic, nonlinear stretch). Even at the dimmer side of the curve, things can still be treated as having a linear relationship with the addition of noise added on top. I.e., the relationship between actual signal and ADU is still linear even in the dimmer regions, in that sense.

sophiecentaur said:
As you point out, using half power radius is a useful tool because half power allows guiding with dim stars but that's not the same as claiming that spikes (which actually go on for ever, theoretically) will all be visible for all stars. Your data near the peak can be considered to be linear but it eventually ends up being non-linear
No. Even if that wording isn't strictly incorrect (and it might be), it's at least misleading.

The characteristics of the diffraction patterns, including the spikes, are not a nonlinear function of intensity. The relationship is linear, all the way down. (And even if we take sensor imperfections into consideration, it's still mostly linear, all the way down.)

Yes, of course there are limitations of the sensor. But that's not the point. We're talking about characteristics of the diffraction pattern (including the spikes) before they even strike the sensor.

The diffraction pattern itself (including the spikes) is not a function of intensity. The shape and size of the pattern are independent of the number of photons reaching the detector. The only thing that changes as you increase the number of photons is the number of photons. That's it.

-----------

Let me explain it another way.

I'm sure you are familiar with the double slit experiment. That's the experiment where light passes through a pair of slits, and an interference pattern is formed on the backdrop.

As I'm sure you know, the interference pattern is the same regardless of the photon rate. A moderately bright light source will produce the same pattern as the pattern produced if the photons were sent one at a time. Sending more photons only has the effect of more photons; the pattern itself doesn't change, the only difference is the number of photons.

JWST's optics (or the optics of any telescope, for that matter) work the same way. But instead of just having a pair of slits, you have mirrors, struts, hexagonal patterns in the primary mirror, and other obstructions. Taken together, when photons from a light source pass through these optics they too will form an interference pattern when striking the sensor.

And that interference pattern is the pattern we see, with the comparatively bright central spot, the somewhat dimmer hexagonal pattern surrounding it, and even the "diffraction spikes." These are all part of the same pattern that every last photon is subject to as it passes through JWST's optics.

One difference from the double slit experiment that should be noted is that in the case of the JWST, photons can come from slightly different directions (not all light sources are along the central axis), which shifts the corresponding interference pattern to different parts of the sensor, depending on the direction of the photon source.

But just as in the double slit experiment, photons from a given light source produce the same interference pattern regardless of the photon rate. It doesn't matter if many photons pass through in quick succession or slowly, one at a time. The pattern on the sensor is the same.
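That rate-independence can be checked with a toy Monte Carlo: sample photons one at a time from a fixed interference pattern and compare the normalized histograms for a "dim" and a "bright" source. (A 1-D cos² pattern is assumed purely for illustration.)

```python
import numpy as np

rng = np.random.default_rng(0)

# A fixed two-slit-style interference pattern, used as the probability
# density for where each photon lands (toy 1-D model).
x = np.linspace(-1, 1, 201)
pattern = np.cos(20 * x) ** 2
p = pattern / pattern.sum()

def observe(n_photons):
    """Each photon independently samples the *same* fixed pattern."""
    hits = rng.choice(len(x), size=n_photons, p=p)
    counts = np.bincount(hits, minlength=len(x))
    return counts / n_photons  # normalized shape of what was recorded

dim = observe(1_000)         # faint source: few photons
bright = observe(1_000_000)  # bright source: many photons

# Both converge to the same underlying pattern; only the shot noise differs.
print(round(float(np.abs(dim - p).max()), 4))
print(round(float(np.abs(bright - p).max()), 4))
```

The bright source's histogram hugs the pattern more tightly only because shot noise averages down with more photons; the pattern being sampled never changed.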

Claiming that characteristics of the diffraction pattern, including spikes, other than mere photon count, are a function of the star's brightness is simply false. It's akin to saying the interference pattern of the double slit experiment is a function of the photon rate. It's not. The claim is false.

sophiecentaur said:
The bit depth of the cameras appears to be 16, which corresponds to a maximum relative magnitude of about 12, at which point the artefacts will disappear. That implies a star that is exposed at the limit will only display artefacts down to a magnitude of 12, relative to the correctly exposed star.
'Seeing' is excellent at a Lagrange point but scatter (artefacts, if you like) from nearby bright stars can limit how far down you can go and I suspect that that figure of 12 for relative magnitude could be optimistic. Signal to noise ratio can be eaten into by 'interference' / cross talk.

Signal processing can be a big help but that will take us away from a linear situation and could suppress nearby faint stars further.

You lost me here.
  • Seeing is caused by the atmosphere, and JWST is not subject to seeing since it is outside the atmosphere.
  • Scattering? Are you suggesting light from far away stars is bouncing off nearby stars? I'm not sure what you mean here.
  • Interference and crosstalk? You really lost me on that. In conventional communication systems, interference and crosstalk can be caused by nonlinear components in the system, but for JWST, the signal is completely linear all the way to the sensor, and then still mostly linear (saturation being an exception) up until the point of the artistic "stretch" -- the stretch that's performed before the image is posted to the media. But before that, everything is linear.
 
  • #29
collinsmark said:
All astrophotos start as linear.
I'm afraid there are too many points in your post to be addressed.

OK, I already accepted that the sensor itself is linear over a wide range and that stacking will allow a substantial increase in effective bit depth by averaging out random noise. I also know that the maths of diffraction goes on and on, as far down as you like. Real life is not like that. We always run out of range because noise and interference are present.

Do you have a reference about 32bit ADCs on the JWST? Wherever I have found a bit depth of the sensor arrays mentioned, it's been 16bits. Stacking can be achieved in many ways and they are all based on non linear processing to reject spurious data so I have to assume that, in fact, the linearity / purity of the images is restricted to 16 bits. JWST doesn't need defending and 16 bit data has been quite adequate as a source of fantastic and revealing images.
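On the stacking point above, the standard result is that averaging N statistically independent frames reduces random noise by a factor of about sqrt(N), which is where the "effective bit depth" gain comes from. A quick sketch with made-up numbers (a constant signal plus Gaussian read noise):

```python
import numpy as np

rng = np.random.default_rng(42)

signal = 100.0          # "true" pixel value (arbitrary units)
read_noise = 10.0       # per-frame Gaussian noise, sigma = 10

def stack(n_frames, n_pixels=100_000):
    """Average n_frames noisy exposures of the same constant scene."""
    frames = signal + read_noise * rng.normal(size=(n_frames, n_pixels))
    return frames.mean(axis=0)

sigma_1 = stack(1).std()     # noise of a single frame, ~10
sigma_16 = stack(16).std()   # noise after averaging 16 frames, ~2.5

# Averaging 16 frames should cut the noise by roughly sqrt(16) = 4.
assert 3.0 < sigma_1 / sigma_16 < 5.0
```

(Real stacking pipelines also do outlier rejection, e.g. sigma clipping, which is the nonlinear part being referred to above; a plain mean, as here, is linear.)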
 
  • #30
sophiecentaur said:
Do you have a reference about 32bit ADCs on the JWST? Wherever I have found a bit depth of the sensor arrays mentioned, it's been 16bits. Stacking can be achieved in many ways and they are all based on non linear processing to reject spurious data so I have to assume that, in fact, the linearity / purity of the images is restricted to 16 bits. JWST doesn't need defending and 16 bit data has been quite adequate as a source of fantastic and revealing images.

As far as I can tell from the metadata of the JWST raw data, most if not all of the observations are single exposures, not stacks of multiple exposures. It does take multiple monochrome exposures, each through a different filter, to produce a color image, but that's not the same as "stacking" to reduce noise. Since the downloadable images are calibrated to remove noise, however, it would make sense to perform those calibrations at as high a bit depth as possible, i.e., 32-bit, even if the sensors only operate at 16-bit.

Accumulated analog charges are converted to 16-bit digital signals with up to 65,535 data counts (analog-to-digital units, or ADU) in each pixel. The gain values are roughly 2 electrons per data count (2 e–/ADU) on average, varying among the detectors as well as among pixels within each detector (see the Detector Performance article).

https://jwst-docs.stsci.edu/jwst-ne...rcam-instrumentation/nircam-detector-overview

Well capacity is equal to or greater than 65000…

https://www.teledyne-si.com/products/Documents/TSI-0855 H2RG Brochure-25Feb2022.pdf
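Putting the quoted NIRCam numbers together (16-bit ADC, gain of roughly 2 e-/ADU), and connecting them to the "about 12 magnitudes" figure mentioned earlier in the thread, since a magnitude difference is 2.5·log10 of a flux ratio:

```python
import math

full_well_adu = 65535   # 16-bit ADC ceiling (from the NIRCam docs quoted above)
gain = 2.0              # approx. electrons per data count (varies per detector/pixel)

# Roughly the full-well charge a pixel can report before saturating.
full_well_electrons = full_well_adu * gain   # ~131,070 e-

# Single-exposure dynamic range as a magnitude difference, taking the
# faintest distinguishable signal as ~1 ADU:
delta_m = 2.5 * math.log10(full_well_adu / 1)
print(round(delta_m, 2))  # ~12.04, matching the "about 12" figure above
```

This is only a back-of-envelope bound; real sensitivity limits depend on read noise, background, and exposure time, not just ADC quantization.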
 
  • #31
Isn’t the maximum diffraction angle of a photon from a dim star the same as the maximum diffraction angle of a photon from a bright star, for the reasons stated in @collinsmark's previous post?

collinsmark said:
But identical to the double slit experiment, photons from a given light source have the same interference pattern regardless of the photon rate. It doesn't matter if many photons pass through in quick succession or pass through slowly, one at a time. The pattern on the sensor is the same.
 
  • #32
Firstly, an apology: the FWHM is indeed not a function of star magnitude. I was mistaken; it is a measure of "shape".

Yes, the theoretical diffraction pattern of a point source convolved with the telescope's instrumental profile is infinite in extent, but this is misleading.

The image is also the result of sampling by the detector, bounded both by the detector's physical size and by the length of the exposure. At some point, as the intensity of the point source drops, no photons at all will be captured in the fainter parts of the point-source image, even with a noiseless detector. The image is therefore either finite, within the area of the detector, and measurable using standard methods, or undefined if it extends to the detector's edge.

This is true of both the core of the star image and the diffraction spikes. You can measure from the centroid out to the point where the intensity of the object of interest falls to zero, or more commonly to some defined fraction above the noise floor. This is routinely done when setting photometric apertures.
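The centroid-to-cutoff measurement described above can be sketched as follows. The profile, noise floor, and 3-sigma cutoff here are all toy values chosen for illustration, not any standard photometry package's defaults:

```python
import numpy as np

# Toy radial profile: a Gaussian core standing in for the star/spike.
r = np.arange(0, 50)                     # radius from the centroid, in pixels
peak, sigma_psf = 10_000.0, 3.0
profile = peak * np.exp(-(r / sigma_psf) ** 2 / 2)

noise_floor = 5.0                        # assumed background RMS
k = 3.0                                  # cutoff: k-sigma above the floor

# Measured extent: first radius where the profile drops below k * noise_floor.
extent = r[np.nonzero(profile < k * noise_floor)[0][0]]

# A star 100x fainter "ends" at a smaller radius, even though the underlying
# diffraction shape is identical -- which is the point made above.
faint = profile / 100.0
faint_extent = r[np.nonzero(faint < k * noise_floor)[0][0]]
assert faint_extent < extent
```

So the measurable extent of a spike does grow with brightness, while the diffraction pattern itself does not change.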

Regards Andrew
 
  • #33
Whatever the captured bit depth it would be normal to do the calibration in 32 bit arithmetic and save the result in 32 bit files.
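A minimal sketch of that practice, with made-up bias and flat-field values (the real JWST pipeline steps are far more involved): the raw 16-bit integers are promoted to 32-bit floating point before any arithmetic, so fractional results aren't truncated or clipped.

```python
import numpy as np

# Raw frame arrives as 16-bit integers; calibrate in float32, save as 32-bit.
raw = np.array([[1000, 40000], [65535, 2000]], dtype=np.uint16)
bias = np.full(raw.shape, 500.0, dtype=np.float32)            # assumed bias level
flat = np.array([[1.0, 1.1], [0.9, 1.0]], dtype=np.float32)   # assumed flat field

calibrated = (raw.astype(np.float32) - bias) / flat

assert calibrated.dtype == np.float32
# Fractional values survive that integer uint16 arithmetic would have lost:
assert abs(calibrated[0, 1] - (40000 - 500) / 1.1) < 1e-2
```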
Regards Andrew
 
  • #35
andrew s 1905 said:
Yes the theoretical diffraction pattern of a point source convoluted with the telescopes instrumental profile is infinite in extent but this is misleading.
It's very misleading because a real image contains no true point sources, and there are also a number of other sources in the vicinity of the low-level parts of a diffraction spike. These constitute a 'floor' which can be significantly above the least-significant step of the ADC.
 

1. What is diffraction and how does it affect telescopes like the JWST?

Diffraction is a phenomenon that occurs when a wave, such as light, encounters an obstacle or passes through a narrow opening. In telescopes, diffraction can cause light to spread out and create blurry images, reducing the overall image quality. This can be especially problematic for telescopes like the JWST, which have large primary mirrors that can cause significant diffraction effects.

2. How does the design of the JWST help reduce diffraction effects?

The JWST has a unique design that includes a large primary mirror made up of 18 hexagonal segments. These segments can be individually adjusted to maintain the correct shape, reducing the amount of diffraction that occurs. Additionally, the telescope is designed to operate at infrared wavelengths, which have longer wavelengths and are less affected by diffraction compared to visible light.

3. Are there any other artifacts that can affect the images produced by the JWST?

Yes, there are other artifacts that can affect the images produced by the JWST. One common artifact is known as "ghosting," which occurs when light reflects off the internal surfaces of the telescope and creates secondary images. The JWST has a specially designed baffling system to minimize this effect.

4. Can diffraction effects be completely eliminated in telescopes?

No, diffraction effects cannot be completely eliminated in telescopes. However, they can be minimized through careful design and calibration. The JWST has been extensively tested and calibrated to reduce diffraction effects as much as possible, but some level of diffraction will always be present.

5. How do scientists account for diffraction effects when analyzing data from the JWST?

Scientists use sophisticated software and algorithms to correct for diffraction effects when analyzing data from the JWST. This includes techniques such as deconvolution, which can help to sharpen images and improve their quality. Scientists also take into account the known diffraction patterns of the JWST and its instruments when interpreting data.
