Why combine the JWST data physically instead of digitally?

  • Context: High School 
  • Thread starter: DaveC426913
  • Tags: jwst

Discussion Overview

The discussion revolves around the methods of combining data from the James Webb Space Telescope's (JWST) mirror segments, specifically whether to combine them physically (optically) or digitally. Participants explore the implications of each approach for image quality, noise reduction, and data management.

Discussion Character

  • Debate/contested
  • Technical explanation
  • Conceptual clarification

Main Points Raised

  • One participant argues that combining images physically leads to loss of individual data and suggests that keeping the segments' data separate allows for more flexibility in processing and stacking algorithms.
  • Another participant counters that physically combining the segments increases resolution and signal-to-noise ratio (SNR) compared to stacking separate exposures, since a single combined exposure incurs the camera sensor's noise only once instead of once per frame.
  • It is noted that the imaging sensor records intensity rather than phase information, which limits the ability to combine images digitally with the same detail as optical combination.
  • Some participants express confusion about the advantages of physical optimization over digital methods, questioning whether noise from separate images could aid in distinguishing signal from noise.
  • One participant emphasizes that optimizing physically is generally superior to digital optimization, citing practical limitations in constructing large physical systems.
  • Another participant elaborates on the SNR benefits of a single exposure from combined segments versus stacking multiple images, indicating that the noise characteristics differ significantly between the two methods.

Areas of Agreement / Disagreement

Participants express differing views on the merits of physical versus digital data combination, with no consensus reached on the best approach. Some support physical combination for its advantages in resolution and noise reduction, while others advocate for the flexibility of digital methods.

Contextual Notes

Participants discuss the limitations of the imaging sensor's capabilities, particularly regarding phase information, and the implications of noise in the context of astrophotography. The discussion highlights the complexity of balancing physical and digital optimization without resolving the underlying technical details.

DaveC426913
TL;DR
Why align them physically? Why not in software, as needed?
When I'm Photoshopping or film editing or 3D modeling, I wait until the last possible moment to combine anything. Once combined, the individual data would be lost.

Why bother physically combining the images from JWST's segments? Why not receive the data as 18 channels, store them separately, and stack them digitally at our leisure in the air-conditioned comfort of our offices? Then we can pick and choose - and even replace - our stacking algorithms.

It seems to me that it's the equivalent of flattening all my .PSDs into GIFs before saving. Or throwing away my .BLEND files and only keeping the exported .STLs.

I can see an 18-fold reduction in bandwidth, but otherwise...
 
Drakkith
The segments operate together as a single larger mirror to create a single image. For starters, this increases resolution, because a larger mirror allows for a smaller diffraction pattern (Airy disk) at the image plane, which produces a higher-quality image. It also increases the SNR of the images compared to simply stacking 18 separate exposures, as the camera sensor itself adds noise to each image. With a single exposure you only have a single round of noise generation, not 18.
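A quick way to see the resolution claim numerically: the telescope's point-spread function is (up to scaling) the Fourier transform of its pupil, so a wider aperture gives a narrower Airy core. The sketch below is a toy numpy model with arbitrary grid and aperture sizes chosen as stand-ins, not JWST's real geometry:

```python
# Toy model: PSF width vs. aperture size via the pupil's Fourier transform.
# Grid and aperture sizes are arbitrary stand-ins, not JWST's geometry.
import numpy as np

N = 512                                    # simulation grid (pixels)
y, x = np.mgrid[-N//2:N//2, -N//2:N//2]
r = np.hypot(x, y)

def psf_fwhm(diameter_px):
    """Full width at half maximum (in FFT pixels) of a circular aperture's PSF."""
    pupil = (r <= diameter_px / 2).astype(float)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil)))**2
    psf /= psf.max()
    return np.count_nonzero(psf[N//2] >= 0.5)  # width of the central row's core

print(psf_fwhm(32))   # one small "segment": broad Airy core
print(psf_fwhm(96))   # 3x the diameter: core roughly 3x narrower
```

The half-max width shrinks in proportion to the diameter, which is the ##\lambda/D## scaling described above.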
 
Ibix
Because the imaging sensor only records intensity, not phase information. That means that you cannot post-combine the eighteen images with the same level of detail you get from optically combining them (you can only add ##|A(x,y)|^2##, not the complex amplitude ##A(x,y)e^{i\phi(x,y)}##). As Drakkith says, you effectively get eighteen small telescopes instead of one big one.

At radio frequencies, our electronics are fast enough to record phase, so we can combine the signals from separate telescopes digitally. That's what Very Long Baseline Interferometry is.
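To illustrate the intensity-versus-phase point numerically: if you still had the complex fields from two sub-apertures, summing them before detection preserves the fine interference fringes set by their separation; summing the detected intensities instead, which is all an optical sensor gives you, loses them. Here is a 1-D toy sketch (my own construction, not from the thread; all sizes and positions are arbitrary):

```python
# Toy 1-D model: coherent (field) vs. incoherent (intensity) combination
# of two sub-apertures. Sizes and positions are arbitrary illustrations.
import numpy as np

N = 1024
pupil = np.zeros(N)
pupil[100:120] = 1.0      # sub-aperture 1
pupil[500:520] = 1.0      # sub-aperture 2, separated by a long baseline

# Coherent: sum the complex fields first, then detect intensity.
field = np.fft.fftshift(np.fft.fft(pupil))
coherent = np.abs(field)**2                    # fine fringes from the baseline

# Incoherent: detect each sub-aperture's intensity, then add (phase is gone).
left = np.where(np.arange(N) < 300, pupil, 0.0)
right = pupil - left
f1 = np.fft.fftshift(np.fft.fft(left))
f2 = np.fft.fftshift(np.fft.fft(right))
incoherent = np.abs(f1)**2 + np.abs(f2)**2     # fringes washed out

# Crude fringe-contrast proxy: the coherent pattern is much "bumpier".
print(coherent.std() / coherent.mean())
print(incoherent.std() / incoherent.mean())
```

Both patterns contain the same total energy; only the coherent one carries the baseline-scale detail, which is exactly what intensity-only recording throws away.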
 
I don't doubt you; I just don't grok it.

Surely:
  • Anything that can be optimized physically at the start can be better optimized digitally downstream.
  • Noise in 18 separate images is the way to know what's noise and what isn't. That's why we stack separate images in astrophotog.
No?
 
Ibix said:
Because the imaging sensor only records intensity, not phase information. That means that you cannot post-combine the eighteen images with the same level of detail you get from optically combining them...
Ah! The phase.

But why can't you sync the signals like we do with Very Lon...
Ibix said:
In radio frequency our electronics are fast enough to record phase, so we can combine telescope images from separate telescopes.
Ah! Because radio waves have way low freqs. Got it!
 
DaveC426913 said:
Anything that can be optimized physically at the start can be better optimized digitally downstream.
Absolutely not. In fact the reverse is true. Anything you can optimize physically is better than digitally. Unless it costs too much. That's why we don't have giant radio antennas that are kilometers across. They'd be better than moving smaller telescopes during aperture synthesis, but they are far, far too large to be practical.
DaveC426913 said:
Noise in 18 separate images is the way to know what's noise and what isn't. That's why we stack separate images in astrophotog.
Unfortunately stacking doesn't solve the noise issue. Your SNR will increase roughly in proportion to the square root of the number of images stacked, so with 18 images your SNR will be about 4x (##\sqrt{18} \approx 4.2##) better than a single image. But a single image with 18x the light is affected by only one round of sensor noise and isn't subject to the square-root rule (with regard to sensor noise only). If there were no other noise sources in the image, it could be up to 18x better in SNR than a single exposure from an individual mirror segment. In practice the gains aren't as great, of course, because of other noise sources.
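A quick Monte Carlo check of that claim, with made-up numbers (the signal level and read noise below are assumptions chosen so that sensor noise dominates, not JWST's real figures):

```python
# Compare SNR: stack of 18 single-segment exposures vs. one exposure
# with 18x the light. Signal and read-noise values are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
signal = 100.0          # photons per segment per exposure (assumed)
read_noise = 50.0       # e- of sensor read noise, set high to dominate
n, trials = 18, 100_000

# Stacking: each of the 18 frames pays its own round of read noise.
stacked = sum(rng.poisson(signal, trials) + rng.normal(0.0, read_noise, trials)
              for _ in range(n))

# Single deep exposure: 18x the photons, but read noise is paid once.
single = rng.poisson(n * signal, trials) + rng.normal(0.0, read_noise, trials)

print("stacked SNR:", stacked.mean() / stacked.std())   # ~ 8
print("single  SNR:", single.mean() / single.std())     # ~ 27
```

With these numbers the single deep exposure comes out roughly 3x better than the 18-frame stack, consistent with the square-root argument above.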

I hope all that makes sense. I'm very tired and about to go to bed. o0)
 
Drakkith said:
Absolutely not. In fact the reverse is true. Anything you can optimize physically is better than digitally. Unless it costs too much.
This is why, in the movies, physical effects (like those Ridley Scott often uses) often look better than computer-generated effects.
 

Similar threads

  • · Replies 1 ·
Replies
1
Views
4K