Rayleigh criterion when light phase is known


Discussion Overview

The discussion revolves around the Rayleigh criterion in optics, particularly in the context of measuring resolution and the potential to distinguish between two closely spaced light sources when their phase information is known. Participants explore theoretical implications and practical applications related to coherent detection and imaging techniques.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested

Main Points Raised

  • One participant questions whether a detector with a high frequency response would show oscillating intensity from light, suggesting that typical observations may reflect an average intensity due to human limitations.
  • Another participant proposes that if the two light sources are at different spatial locations, overcoming the Rayleigh criterion would require phase measurements at various distances or angles.
  • A different participant mentions that the concept of coherent detection relates to the Rayleigh criterion and that the relative phase between coherent sources can influence resolution capabilities.
  • One participant provides a detailed explanation of the Rayleigh criterion, noting its origins and how it applies to independent incoherent sources, while also discussing factors like signal-to-noise ratio that can affect resolution beyond the criterion.

Areas of Agreement / Disagreement

Participants express differing views on the implications of phase knowledge in overcoming the Rayleigh criterion, with some suggesting it is possible under certain conditions while others emphasize the limitations and specific contexts in which the criterion applies. The discussion remains unresolved regarding the extent to which phase information can definitively allow for resolution beyond the criterion.

Contextual Notes

Participants highlight the dependence on definitions of coherence and the specific conditions under which the Rayleigh criterion is applied, as well as the influence of signal-to-noise ratios on measurement outcomes.

Adgorn
Hi everyone,
this is sort of a soft question which I need to ask to make sure my understanding is correct; it relates to a little project I'm doing on measurement resolution. The first question clears up a general concept; the second builds on the first and is the actual question.

First, when light is directed on a detector, what is seen is a "patch" with a certain shape and a fixed intensity. However, being an electromagnetic wave, the magnitude of the field oscillates between 0 and some maximum at a very fast rate. So if we had a theoretical measuring device capable of sampling the light hitting it at a frequency of, say, a quadrillion hertz, and directed visible light at it, would the detector show a "patch" with oscillating intensity? If so, is the fixed intensity seen normally just our puny mortal eyes seeing the average intensity due to a lack of precision?
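(A quick numerical sketch of this idea; the code and numbers are my own illustration, not from the thread. It treats the light as a scalar field ##E(t) = E_0\cos(2\pi f t)##, whose instantaneous intensity ##E(t)^2## oscillates at ##2f##, while a slow detector reports the time average ##E_0^2/2##.)

```python
import numpy as np

# A scalar field E(t) = E0*cos(2*pi*f*t) has instantaneous intensity
# proportional to E(t)**2, which oscillates at 2f. A slow detector
# reports the time average, E0**2 / 2.
f = 5.0e14                                       # ~visible-light frequency in Hz
E0 = 1.0
t = np.linspace(0, 3 / f, 3000, endpoint=False)  # three full optical cycles
E = E0 * np.cos(2 * np.pi * f * t)
I_inst = E**2                  # oscillates between 0 and E0**2 at frequency 2f
I_avg = np.mean(I_inst)        # what a slow detector sees: ~E0**2 / 2

print(round(I_avg, 3))   # → 0.5
```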

Now, assuming the answer for the above is somewhat positive, it's time for my real question regarding the Rayleigh criterion. The criterion (when relating to optics) says that if 2 given light sources are too close, or to be more exact, their 2 projections are too close, it will be impossible to tell whether said projection is a result of a single light source or 2 close light sources. My question is whether that would be the case if we knew the phase of the 2 light sources at all times.

For example, say we project 2 light sources with the same frequency ##f## and amplitude through a slit so each source creates a nice interference pattern, but since the sources are closer than the Rayleigh criterion, the peaks of the 2 patterns merge into what seems like a single peak. But say we have our super-accurate measuring device, and for convenience let's also say that the phase difference between the 2 sources is exactly ##\frac \pi 2##. If the assumption of the first question is true, then when the first signal is at its peak intensity, the second will be at its minimum (which is perhaps 0); this means that the peak of the first signal will be visible and much more prominent than the second peak. ##\frac 1 {4f}## seconds later, the opposite happens, so the second peak will be visible and the first will not. Clearly in this situation one will be able to tell whether the projection is a result of 2 sources or just 1, depending on whether or not the main peak changes location every ##\frac 1 {4f}## seconds. This can also work when the phase difference is pretty much anything other than 0, although maybe to a lesser extent.
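(The alternating-peak argument can be sketched numerically; this toy model is my own construction, assuming idealized single-slit sinc amplitude patterns centred at ##\pm x_0## with a ##\frac \pi 2## relative phase. It snapshots the instantaneous intensity at two times a quarter period apart and reports where each snapshot peaks.)

```python
import numpy as np

x = np.linspace(-3, 3, 2001)
sep = 0.5                       # peak separation in units of the first-zero
                                # spacing, i.e. below the Rayleigh limit of 1
a1 = np.sinc(x - sep / 2)       # amplitude pattern of source 1, centred at +sep/2
a2 = np.sinc(x + sep / 2)       # source 2, centred at -sep/2, lagging by pi/2

# instantaneous intensity at two times a quarter period apart
I_t0 = (a1 * np.cos(0.0) + a2 * np.cos(0.0 + np.pi / 2))**2   # ~ a1**2
I_t1 = (a1 * np.cos(np.pi / 2) + a2 * np.cos(np.pi))**2       # ~ a2**2

peak_t0 = x[np.argmax(I_t0)]    # lands near +sep/2
peak_t1 = x[np.argmax(I_t1)]    # a quarter period later: near -sep/2
print(round(peak_t0, 2), round(peak_t1, 2))   # → 0.25 -0.25
```

So in this idealized picture the dominant peak does hop between the two source positions every quarter period, exactly as described.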

So, when the phase of 2 light sources is known and the difference between the phases is not 0, is it possible to overcome the Rayleigh criterion?
 
Can we assume the 2 light sources are at different points in space? If that's the case then, also assuming the emitted light waves are spherical, to overcome the Rayleigh criterion you would need to take a series of phase measurements at different distances from the sources, or at different angles, wouldn't you?
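(A toy far-field sketch of this suggestion; the numbers and setup are my own illustration. Two coherent point sources a distance ##d## apart, with known relative phase, produce a detected amplitude that varies across observation angle, whereas a single source gives a flat far field, so scanning a phase-sensitive detector over several angles distinguishes the two cases even for ##d## well below the classical limit.)

```python
import numpy as np

wavelength = 1.0
k = 2 * np.pi / wavelength
d = 0.3 * wavelength               # separation well below the resolution limit
phi = np.pi / 2                    # known relative phase between the sources

theta = np.linspace(-0.5, 0.5, 101)     # observation angles in radians
delta = 0.5 * k * d * np.sin(theta)     # +/- geometric phase of each source

# coherent sum of the two spherical waves in the far field vs. a single source
two_src = np.exp(1j * delta) + np.exp(1j * phi) * np.exp(-1j * delta)
one_src = np.full_like(theta, 2.0, dtype=complex)   # single source: flat far field

spread_two = np.ptp(np.abs(two_src))   # clearly varies across the scan
spread_one = np.ptp(np.abs(one_src))   # exactly flat
print(spread_one < 1e-12, spread_two > 0.5)   # → True True
```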
 
Adgorn said:
So, when the phase of 2 light sources is known and the difference between the phases is not 0, is it possible to overcome the Rayleigh criterion?
I believe this is the basis of aperture synthesis and suggest you track that down. Let me know the result!
 
Adgorn said:
<snip>
So, when the phase of 2 light sources is known and the difference between the phases is not 0, is it possible to overcome the Rayleigh criterion?

Most of your questions are easily answered in terms of millimeter-wave imaging (or radio astronomy), because phase-sensitive detectors exist there. These detectors are primarily 'point detectors', meaning there is only a single pixel, but there are research efforts to construct array detectors.

What you are asking about is known as 'coherent detection', and the Rayleigh criterion is indeed affected by the relative phase between two separated mutually coherent point sources. Note, different stars are mutually incoherent sources.
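(To make the phase dependence concrete, here is a toy one-dimensional model; it is my own sketch, assuming idealized single-slit sinc amplitude patterns at the Rayleigh separation. Mutually coherent sources add in amplitude, so the midpoint dip depends strongly on their relative phase, while incoherent sources always add in intensity.)

```python
import numpy as np

x = np.linspace(-3, 3, 2001)
sep = 1.0                           # Rayleigh separation for a slit (first zero)
a1 = np.sinc(x - sep / 2)           # amplitude patterns of the two sources
a2 = np.sinc(x + sep / 2)

def midpoint_dip(I):
    # ratio of the intensity at the midpoint to the brightest point
    return I[len(x) // 2] / I.max()

I_incoh = a1**2 + a2**2             # incoherent: intensities add
I_phi0  = np.abs(a1 + a2)**2        # coherent, in phase
I_phipi = np.abs(a1 - a2)**2        # coherent, pi out of phase

print(round(midpoint_dip(I_incoh), 3),   # ~0.811: the classic shallow dip
      round(midpoint_dip(I_phi0), 3),    # 1.0: peaks merge, no dip at all
      round(midpoint_dip(I_phipi), 3))   # 0.0: a perfect null between them
```

In-phase coherent sources merge into a single peak (worse than incoherent), while a ##\pi## phase difference puts a null at the midpoint, which is the sense in which relative phase affects resolvability.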
 
Adgorn said:
it's time for my real question regarding the Rayleigh criterion
Before going any further with this, we need to know what the Rayleigh Criterion is, exactly. The RC was devised as a rule of thumb (pretty arbitrary but convenient) to apply to two independent (incoherent) light sources (e.g. equal-brightness stars) to decide whether they could be resolved (visually) as two sources rather than a single source. Each source produces a diffraction pattern when it passes through an aperture (originally an astronomical telescope, I think). The brightness pattern for the two stars shows a smooth dip between the two maxima, and the Rayleigh condition is when the maximum of one pattern falls on the first minimum of the other, so the brightness at the midpoint dips only modestly (to roughly three-quarters of the peak for a circular aperture).

If the optics have been well characterised and the signal-to-noise ratio is good (good 'seeing' with low light pollution), you can do much better, provided your imaging array is good and high-resolution enough. That 'saddle' curve doesn't need to dip that far if the stars are bright enough; you can resolve with a much shallower dip as long as the 'noise' doesn't fill it in. Stacking multiple images can suppress the noise. There is a lower limit of star brightness below which you cannot do better than the Rayleigh criterion - in fact you will do worse! It's the same with all imaging / measurements; signal-to-noise ratio is what really counts.
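(A toy numerical illustration of the signal-to-noise point; this is my own construction, using slit-aperture sinc² patterns rather than Airy discs. A single noisy exposure often fails to show the shallow Rayleigh dip, while stacking many exposures suppresses the noise and restores it.)

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-2, 2, 401)
sep = 1.0                                   # Rayleigh separation (first zero of sinc)
I_clean = np.sinc(x - sep / 2)**2 + np.sinc(x + sep / 2)**2   # two incoherent sources

def dip_visible(I):
    # crude resolvability test: is the midpoint dimmer than both peak positions?
    mid = I[len(x) // 2]
    left = I[np.argmin(np.abs(x + sep / 2))]
    right = I[np.argmin(np.abs(x - sep / 2))]
    return bool(mid < left and mid < right)

sigma = 0.15                                # noise comparable to the ~0.19 dip depth
# fraction of single noisy exposures in which the dip survives
single_ok = np.mean([dip_visible(I_clean + rng.normal(0, sigma, x.size))
                     for _ in range(500)])
# stacking 400 exposures cuts the noise by a factor of 20
stacked = I_clean + rng.normal(0, sigma, (400, x.size)).mean(axis=0)

print(dip_visible(I_clean), single_ok < 0.95, dip_visible(stacked))   # → True True True
```

With noise on the order of the dip depth, a noticeable fraction of single exposures lose the dip entirely, while the stacked image recovers it - the "noise fills it in" effect described above.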
 
