Is there a limit to the amount of info in reflected light?

In summary, the amount of detail recoverable from reflected light decreases as the distance between the observer and the object increases. Technology that has not yet been developed might, however, eventually make it possible to resolve smaller objects than current instruments can.
  • #36
mfb said:
The comparison is a bit unfair, as VLBI looks for coherent sources of radiation.
I don't think this is true (but I could be wrong) - it depends what you mean by "coherent". VLBI accepts and interferes a large band of microwave frequencies, not just a single laser-like source. True, it is a very narrow band compared to light, but not particularly narrow for its microwave regime. The two sources are "coherent" simply because they both come from the same small area of the sky.
mfb said:
... a bacterium is not a laser - it won't emit coherent radiation.
Provided the light is being emitted from the same (small) source area, I think it must be coherent? White light coming from a single slit (or a bacterium-sized pinhole!) is coherent enough to interfere when passed through a following double slit. Incredible as it may seem, light from opposite sides of a distant star is also coherent enough to be interfered (try explaining that!).
mfb said:
A VLT-like telescope ...
Yes! A VLT-like telescope in space with very large separation between the detecting apertures was one idea I had in mind. Thanks for your support! Another possibility would be to mix the received light with that from a laser comb generator so as to "mix it down" to a bunch of microwave frequencies, after which it can be digitized and interfered with its partner detector(s) computationally - as is done with VLBI. I believe this approach is somewhat within reach of today's technology.

As mentioned previously, the angular resolution of such an array is governed by Rayleigh's criterion where the "diameter" in the equation is the separation of the satellites.
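
For concreteness, here is a minimal numerical sketch of that Rayleigh estimate, using the satellite separation in place of the aperture diameter; the wavelength and baseline below are purely illustrative choices, not a proposal:

```python
import math

def rayleigh_resolution(wavelength_m, baseline_m):
    """Smallest resolvable angle (radians) for an aperture or baseline of the given size."""
    return 1.22 * wavelength_m / baseline_m

# Illustrative numbers only: 550 nm visible light, two satellites 1000 km apart.
theta = rayleigh_resolution(550e-9, 1.0e6)
print(f"Angular resolution: {theta:.2e} rad "
      f"(~{math.degrees(theta) * 3600e6:.2f} microarcseconds)")
```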
 
  • #37
The bacterium emits incoherent light - the coherence length is of the order of the wavelength of the light itself. If we combine the light of multiple mirrors with the same path length, then we can get interference. Get the path length wrong and you won't gain anything from the combination (apart from more light).
This is different with VLBI, where you can record the phase, store it digitally and then combine it in a computer. You won't be able to do that with incoherent visible light, try to measure the phase and you ruin coherence between mirrors.
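
To put rough numbers on the coherence-length point, a quick back-of-envelope comparison using l_c ≈ λ²/Δλ ≈ c/Δν (the bandwidths below are illustrative, not measured values):

```python
# Rough coherence-length estimates, l_c ~ lambda^2 / delta_lambda = c / delta_nu.
# The numbers below are order-of-magnitude illustrations only.
c = 3.0e8  # speed of light, m/s

def coherence_length_from_bandwidth(delta_nu_hz):
    return c / delta_nu_hz

def coherence_length_from_spectrum(wavelength_m, delta_lambda_m):
    return wavelength_m**2 / delta_lambda_m

# Broadband thermal (white) light, e.g. from a bacterium-sized source:
print(coherence_length_from_spectrum(550e-9, 300e-9))  # ~1e-6 m, of order the wavelength

# A radio interferometer channel a few hundred MHz wide:
print(coherence_length_from_bandwidth(250e6))          # ~1.2 m
```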
 
  • #38
mfb said:
The bacterium emits incoherent light - the coherence length is of the order of the wavelength of the light itself. If we combine the light of multiple mirrors with the same path length, then we can get interference. Get the path length wrong and you won't gain anything from the combination (apart from more light).
This is different with VLBI, where you can record the phase, store it digitally and then combine it in a computer. You won't be able to do that with incoherent visible light, try to measure the phase and you ruin coherence between mirrors.
I don't believe it is any different to VLBI. In the case of VLBI you record the wideband (= incoherent by your definition?) microwave signal at two or more globally spaced locations together with an atomic clock signal to allow them to be resynchronized for interference calculations later on. The path length is adjusted to be identical just by getting the synchronization exactly right. Neighbouring pixel data is obtained by stepping the synchronization very slightly for each interference accumulation calculation. As I recall you don't even have to know the synchronization very exactly to start with - just sweep it through the right region and the image jumps out at you when you get it right! An extremely stable clock signal is necessary however because the signal is very weak and noisy and so has to be coherently integrated for a relatively long time.

The process of "mixing" light down by multiplying it with a very stable carrier light source (laser comb) converts it into the microwave realm while preserving the phase, and gives one a synchronizing signal so that the "path length" can be adjusted exactly right in later calculations. At this point the signal can be recorded electronically, and exactly the same process as is used for VLBI can be followed to obtain the image for each microwave-bandwidth-limited record. Integrating the many images for each small (microwave-wide) segment of the optical bandwidth will give a final complete optical image.
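
A toy sketch of the delay-sweep correlation described above, with synthetic noise standing in for the recorded signals (the sample counts and delays are arbitrary illustrations, not real VLBI data):

```python
# Toy two-station correlator: a common noise waveform (the "source") reaches two
# recorders with an unknown relative delay; cross-correlating the recordings over
# trial delays recovers it. Real VLBI correlators work on band-limited,
# clock-stamped data, so this is only an illustration of the principle.
import numpy as np

rng = np.random.default_rng(0)
n = 20000
source = rng.normal(size=n)              # incoherent (noise-like) source signal

true_delay = 37                          # geometric delay in samples, "unknown" to the observer
station_a = source + 0.5 * rng.normal(size=n)
station_b = np.roll(source, true_delay) + 0.5 * rng.normal(size=n)

# Sweep trial delays and accumulate the correlation ("fringe") amplitude.
trial_delays = np.arange(-100, 101)
fringe = [np.dot(station_a, np.roll(station_b, -d)) for d in trial_delays]

best = trial_delays[int(np.argmax(fringe))]
print(f"Recovered delay: {best} samples (true value {true_delay})")
```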
 
  • #39
If the microwave signal is incoherent, adding it does not increase the resolution, because the phases your different telescopes measure are not correlated then.
 
  • #40
Since VLBI obviously works, the microwave signal arriving at single-sensor-detectors, positioned at globally spaced locations, and pointed at a distant patch of sky that is many thousands of light-years wide must then be coherent. No?

Since white light from a slit (much bigger than a bacterium), which then diverges onto two slits, can be brought back together to interfere, it must also be coherent?

Since the light from a bacterium that diverges widely into a microscope objective lens, can then be brought back together (constructively and destructively combining) to form an image, all of those rays must also be coherent!
 
  • #41
tech99 said:
A radio telescope cannot resolve something smaller than its beamwidth (however that is defined).
The aperture is a major part of what governs the resolving power, but a radio telescope can be much better than just a paraboloid reflector. In fact, the disadvantage that radio astronomy has due to the large apertures required is partially offset by the fact that the amplitude and phase of microwave signals can be handled by electronics, so a given aperture can produce better results (they punch above their weight). So I would say that the above statement is probably more appropriate for optical telescopes than for radio telescopes.
 
  • #42
jwinter said:
Since VLBI obviously works, the microwave signal arriving at single-sensor-detectors, positioned at globally spaced locations, and pointed at a distant patch of sky that is many thousands of light-years wide must then be coherent. No?
The sources must emit coherent radiation. There can be many individual sources emitting radiation that is coherent. Like many individual lasers.
jwinter said:
Since white light from a slit (much bigger than a bacterium), which then diverges onto two slits, can be brought back together to interfere, it must also be coherent?
No - with sunlight (for example) you have to be very careful and overlap the actual radiation (not your measurement values) to see interference. VLBI would not work that way, you would need a worldwide network of RF waveguides, carefully designed to avoid losing coherence.
jwinter said:
Since the light from a bacterium that diverges widely into a microscope objective lens, can then be brought back together (constructively and destructively combining) to form an image, all of those rays must also be coherent!
No, it is similar to the white light.
 
  • #43
mfb said:
To lose information, you would need several different initial states to end up in the same final state. It would also be a direct violation of CPT symmetry. If you find any situation like that, go and take the Nobel Prize(s)!

Now you are just being unreasonable. Google 'diffusion' and "Gibbs paradox". The loss of information is non-reversible.
 
  • #44
DavidReishi said:
I'm sorry if it seemed like that.
No, no, no, and no. Are those decisive issues in our 10 ft petri-dish demo? If so, do you mind telling me how?

I'll take you at your word- let's start over, ok?

First, yes- those three related buzzwords are fundamental concepts that directly address your ability to sufficiently (accurately?) image an object, regardless of your method of imaging. So let's start there, and proceed a little bit at a time.

Please carefully define what "the visual information of the dish's form" means. I'll start you off- "the dish's form is represented by a 3-D optical field, created when incident light illuminates and scatters off of the dish. The 3-D field can be modeled with Kirchhoff's diffraction formula, considering the dish as the illuminated aperture". Now you go from there to "the visual information of the dish's form".

Once you have a quantitative way to describe that information, consider the concept of 'angular spectrum', and think about how that relates to diffraction. What kind of information is diffracted into large angles? What kind of information is diffracted into small angles? As a related topic, think about what Laue/Bragg patterns are and how those images are used to obtain information about crystal structure.

After you have done that, then use Abbe's or Rayleigh's limit to calculate how much of the scattered light, diffracting into the full hemisphere, must be collected to resolve various aspects of the visual information of the dish's form.

That's enough for now, I think...
 
  • #45
jwinter said:
I accept you would need more than just lenses since the light from the widely separated detector apertures needs to be combined in a manner which can interfere constructively and destructively (ie phase preserving).

This is exactly what I posted in #5 and #13 of this thread. Glad we are in agreement!
 
  • #46
Andy Resnick said:
Now you are just being unreasonable. Google 'diffusion' and "Gibbs paradox". The loss of information is non-reversible.
It is not a loss of information. It is a loss of accessible information. If you can prove otherwise, go and get the Nobel Prize. Seriously.

No-cloning theorem and no-deleting theorem tell us that information fundamentally is conserved, and all experiments are in agreement with that.
 
  • #47
mfb said:
The bacterium emits incoherent light - the coherence length is of the order of the wavelength of the light itself. If we combine the light of multiple mirrors with the same path length, then we can get interference. Get the path length wrong and you won't gain anything from the combination (apart from more light).
This is different with VLBI, where you can record the phase, store it digitally and then combine it in a computer. You won't be able to do that with incoherent visible light, try to measure the phase and you ruin coherence between mirrors.
I think the problems mentioned for incoherent light are not fundamental, but just problems of implementation of the telescope, which relies on memory. In the general case, if we consider a source which is modulated with noise, a conventional antenna array can still image it, because all elements of the array receive the signal with an identical modulation envelope and only the "carrier" phase differs over 0 to 360 degrees depending on direction.
For an object smaller than the resolution of the telescope, the fact that different parts of the object's surface radiate incoherently is not important, because the distant telescope sees the vector sum, and so it sees a single noise modulated source. For example, a filament lamp can be located by a telescope.
 
  • #48
tech99 said:
For example, a filament lamp can be located by a telescope.
Sure it can be located, but you don't gain much in angular resolution if you take multiple pictures of it at different locations and combine them later (assuming the filament lamp is located so far away that triangulation does not work). To gain in resolution you need the light coming from the lamp to interfere while taking a single combined picture.
 
  • #49
tech99 said:
I think the problems mentioned for incoherent light ...
There is no problem with incoherent light, because light from the same small area, or small angle of view, is coherent. It just has a short coherence length, of the order of the wavelength of the light (or of the microwaves if we are doing VLBI), as has been pointed out by others. But that doesn't prevent interference. As we well know, when light is focused (so that divergent rays are brought back to the same spot over equal path lengths) we get a good image, i.e. the rays interfere.

There is not a laser (or maser) in space for every pixel on a VLBI image! The microwaves from every pixel area in space are just as "incoherent" as the light from every pixel area on a bacterium. If you "mix" (as in heterodyne, which is phase preserving) light frequencies down to microwave (with a very stable laser comb), then in principle the same process can be done with light as is routinely done with microwaves, and that obviously works. Others will have to work out how it works for themselves because I am tired of trying to explain things on this thread.
 
  • #50
mfb said:
No-cloning theorem and no-deleting theorem tell us that information fundamentally is conserved, and all experiments are in agreement with that.

Sigh... the experiments discussed here are inverse scattering problems; the above don't apply when the initial state can not be completely specified.
 
  • #51
But we can specify the initial state, if we like. In theory only, of course.
jwinter said:
The microwaves from every pixel area in space are just as "incoherent" as the light from every pixel area on a bacterium.
If the physics arguments don't convince you, what about the actual astronomy done? If VLBI would work with sources of visible light, how stupid would astronomers have to be to not use it? Using the Earth as baseline would increase the baseline by 6 orders of magnitude compared to current telescopes, and 4.5 orders of magnitude compared to VLT interferometry.
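
Rough arithmetic behind those orders-of-magnitude figures; the reference baselines chosen below (a ~10 m single mirror and a ~130 m VLTI baseline) are representative values, so the exact exponents vary slightly depending on what you compare against:

```python
# Back-of-envelope comparison of an Earth-sized baseline with existing apertures.
import math

earth_diameter = 1.27e7   # m
single_mirror = 10.0      # m, representative large optical telescope
vlti_baseline = 130.0     # m, representative VLT interferometer baseline

print(f"vs a single large mirror: {math.log10(earth_diameter / single_mirror):.1f} orders of magnitude")
print(f"vs VLT interferometry:    {math.log10(earth_diameter / vlti_baseline):.1f} orders of magnitude")
```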
 
  • #52
The limiting resolution is something like λ/NA where the numerical aperture is lens diameter divided by distance.
The NA drops to astronomically small values for telescopes. In principle, with a very large telescope or interference telescopy NA could be increased, but requirements on phase coherence will impose a technical limit.
The light source does not have to be coherent, but the different parts of the telescope must have a well defined phase relation.
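
A minimal sketch of that λ/NA estimate, taking NA as roughly lens diameter over distance as stated above; the lens size and distance are illustrative only:

```python
# Limiting resolution ~ lambda / NA, with NA approximated (as in the post) by
# lens diameter divided by distance. Illustrative numbers only.
def limiting_resolution(wavelength_m, lens_diameter_m, distance_m):
    na = lens_diameter_m / distance_m   # small-angle approximation
    return wavelength_m / na            # smallest resolvable feature size

# A 10 cm lens looking at the petri dish from 3 m away, green light:
print(limiting_resolution(550e-9, 0.10, 3.0))  # ~1.7e-5 m, i.e. roughly 17 micrometres
```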
 
  • #53
mfb said:
If the physics arguments don't convince you, what about the actual astronomy done? If VLBI would work with sources of visible light, how stupid would astronomers have to be to not use it? Using the Earth as baseline would increase the baseline by 6 orders of magnitude compared to current telescopes, and 4.5 orders of magnitude compared to VLT interferometry.
The physics arguments are unconvincing because there are none (at least none that I have seen or thought of). But the technology problems should be obvious. Optical frequency combs with sub-Hertz stability have only become possible in the last few years, and even if the stability reached is sufficient (I don't know if it is), the challenges are horrific. Think how many microwave (i.e. gigahertz) bandwidths there are in a 500 terahertz light signal! And all of the bands have to be split out and processed separately, with individual optical detection and VLBI-type electronics all operating in parallel. Astronomers are not stupid; they propose things that can be built, not things that are still out of reach.
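
The band-counting arithmetic is simple enough to spell out (500 THz of optical bandwidth and 1 GHz channels are round illustrative figures):

```python
# How many gigahertz-wide channels fit in a ~500 THz optical signal.
optical_bandwidth_hz = 500e12   # visible-light carrier scale
channel_width_hz = 1e9          # one microwave-style channel
print(int(optical_bandwidth_hz / channel_width_hz))  # 500,000 channels
```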
 
  • #54
Seems to me like jwinter mostly answered this question in post 17. The original question wasn't about what information was resolvable (which is what most posts are addressing) but what information is actually there. The only thing I would add to jwinter's answer is that while you may just be able to wait long periods of time to collect enough photons to have enough information to extract any arbitrarily small detail (limited by the wavelength of light), you would, at the same time, lose time resolution. If the bacterium were moving, you would never be able to get a clear picture of it.
 
  • #55
mfb said:
But we can specify the initial state, if we like. In theory only, of course.

This is exactly how *blind* deconvolution works, which I mentioned very early in this thread as a non-optical method of reconstructing information from a blurry image. Furthermore, you keep ignoring the fact that propagation of light through the Earth's atmosphere irreversibly decoheres the light. And it is indeed irreversible. If you keep claiming otherwise, then Google 'seeing', and try to explain why astronomers have spent so much time and effort on this problem. Hint: why is VLBI easy in radio but hard in visible?
 
  • #56
mrspeedybob said:
Seems to me like jwinter mostly answered this question in post 17. The original question wasn't about what information was resolvable (which is what most posts are addressing) but what information is actually there.

I have tried to clearly answer how information degrades as light propagates through a turbulent atmosphere (posts 5, 13, 26). This loss of information is irreversible.

Performing calculations is difficult, but measurements of the information loss can be done- this is an excellent reference:
https://www.repository.cam.ac.uk/handle/1810/251667
 
  • #57
Andy Resnick said:
If you keep claiming otherwise, then Google 'seeing', and try to explain why astronomers have spent so much time and effort on this problem.
It is impossible in practice, but not in theory. If astronomers could know the precise state of the atmosphere, seeing could be corrected perfectly. They do not, so it can only be corrected approximately (which is done already). You keep ignoring the difference between theory and actual applications, and as long as you do that I don't see how this discussion could make progress. Go get your Nobel Prize if you can prove time evolution is not fundamentally symmetric. Probably my last post on that subtopic here.
 
  • #58
What @mfb is alluding to is that according to quantum mechanics, information cannot be destroyed. The evolution of quantum systems being unitary, and considering that everything is a quantum system, means that information is never lost. This is the core issue in the black hole information paradox, since black holes appear to destroy information. There have been a few threads on the subject, especially following Hawking's recent proposal to solve the paradox.

That said, from a practical point of view, as @Andy Resnick has been saying, information can be so hard to recover that it is lost FAPP (for all practical purposes). But that is, in a sense, a technical limit, not a fundamental one. It appears when, for instance, we trace away the environment the light has interacted with, because we simply can't completely describe the state of the environment, due to the number of particles involved. But if we could know the full quantum state, we would find that the information is still there.
 
  • #59
Andy Resnick said:
Please carefully define what "the visual information of the dish's form" means. I'll start you off- "the dish's form is represented by a 3-D optical field, created when incident light illuminates and scatters off of the dish. The 3-D field can be modeled with Kirchhoff's diffraction formula, considering the dish as the illuminated aperture". Now you go from there to "the visual information of the dish's form".

Are you sure it requires all that? What I meant is that it wouldn't seem to make sense to hold that, from 10 feet away, the visual information of the bacteria isn't hitting my face but is scattered too far and wide, if the "visual information of the dish's form," i.e., that info that allows my brain to form a clear, crisp, hard-edged image of the dish, has no problem making it into my small pupils.

mrspeedybob said:
Seems to me like jwinter mostly answered this question in post 17. The original question wasn't about what information was resolvable (which is what most posts are addressing) but what information is actually there. The only thing I would add to jwinter's answer is that while you may just be able to wait long periods of time to collect enough photons to have enough information to extract any arbitrarily small detail (limited by the wavelength of light), you would, at the same time, lose time resolution. If the bacterium were moving, you would never be able to get a clear picture of it.

From your words, one might think that satellite images of Earth in which the human form is visible aren't practically possible. Is it merely the difference in scale between a person's head and a person's skin cell that would necessitate long periods of photon collection and loss of information due to movement?

Andy Resnick said:
...[P]ropagation of light through the Earth's atmosphere irreversibly decoheres the light. And it is indeed irreversible.

Again, doesn't satellite imagery containing the human form prove this to be a non-issue? Or are you saying that the decohering effect of Earth's atmosphere comes into play only regarding smaller visual details?

DrClaude said:
What @mfb is alluding to is that according to quantum mechanics, information cannot be destroyed. The evolution of quantum systems being unitary, and considering that everything is a quantum system, means that information is never lost.

What information are you referring to when you say that, according to quantum mechanics, information cannot be destroyed?
 
  • #60
mfb said:
If VLBI would work with sources of visible light, how stupid would astronomers have to be to not use it?

Read up on the use of the twin Keck scopes, and also on combining them with the scopes at the ESO's Andes site in South America

just one small snippet

Another approach to increasing angular resolution is to interferometrically combine the light from two or more telescopes. In the field of large telescopes, this is being done with the twin Keck telescopes and the four 8 m telescopes of the European Southern Observatory. The 85 m Keck baseline provides an angular resolution of 5 milliarcseconds at a 2 μm wavelength. The AO-corrected light from each telescope is relayed through a series of mirrors to the basement between the two, where the optical paths are matched before the light is interfered. The available science mode consists of fringe-visibility measurements on a near-IR camera. The contrast of the interference fringes is used to determine the size, or other characteristics, of the object being studied.
http://spie.org/newsroom/technical-...esolution-at-keck-observatory?highlight=x2418

Dave
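
A quick consistency check of the figure quoted in that snippet, using the simple λ/B estimate (no Rayleigh factor):

```python
# Angular resolution of the 85 m Keck baseline at a 2 um wavelength.
import math

wavelength = 2e-6   # m
baseline = 85.0     # m
theta_rad = wavelength / baseline
theta_mas = math.degrees(theta_rad) * 3600e3
print(f"{theta_mas:.1f} milliarcseconds")  # ~4.9 mas, consistent with the quoted 5 mas
```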
 
  • #61
@davenn: That is a description of what I said, thanks for confirming it, but what should I read, and why?

Interferometry is done between the Keck telescopes, with mirrors for the actual light to interfere. Interferometry is done between the VLT telescopes, same concept. There is no interferometry combining all those telescopes. You can add the images from the different sites to get more statistics, but that is something different.
 
  • #62
DavidReishi said:
From your words, one might think that satellite images of Earth in which the human form is visible aren't practically possible. Is it merely the difference in scale between a person's head and a person's skin cell that would necessitate long periods of photon collection and loss of information due to movement?
Essentially, yes. If there are a billion skin cells on top of the head, then the head as a whole will reflect a billion times more photons per unit of time. Your sensor will need a certain number of photons to resolve an image. If it's zoomed out to the scale of a head, it will get them a billion times faster than if it is zoomed in to the scale of a skin cell.
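
A toy illustration of that scaling, where the time to collect a fixed number of photons per resolved element grows with the area ratio; the photon flux and threshold below are placeholders, not measured values:

```python
# Photons arriving from a patch scale with its area, so integration time per
# resolved element scales as (head size / cell size)^2. Placeholder numbers only.
head_diameter = 0.2      # m
cell_diameter = 20e-6    # m
photons_needed = 1e4     # per resolved element, arbitrary illustrative threshold
flux_head = 1e9          # photons/s reflected by the whole head toward the sensor (placeholder)

area_ratio = (head_diameter / cell_diameter) ** 2
flux_cell = flux_head / area_ratio

print(f"Area ratio head/cell:   {area_ratio:.1e}")
print(f"Time to image the head: {photons_needed / flux_head:.1e} s")
print(f"Time to image one cell: {photons_needed / flux_cell:.1e} s")
```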

DavidReishi said:
Again, doesn't satellite imagery containing the human form prove this to be a non-issue? Or are you saying that the decohering effect of Earth's atmosphere comes into play only regarding smaller visual details?
Atmospheric distortion is absolutely an issue. Seeing people is relatively easy, but as you zoom into smaller and smaller scales, it becomes more and more of an issue. Actually resolving skin cells from space may be technically impossible. You'd have to account for refraction through every tiny temperature and composition gradient, as well as diffraction past every dust particle. If you could track all the billions of such obstacles and computationally compensate for them, or activate some tractor beam to move them out of the way, then you could see skin cells. This falls into the realm of what may be theoretically possible in the far future, a concept you brought up in post 9. If we move the discussion to a planet with no atmosphere and a perfectly uniform gravitational field, then we get down to just the distance question, which is all jwinter and I were really addressing in our posts.

DavidReishi said:
What information are you referring to when you say that, according to quantum mechanics, information cannot be destroyed?
Not just quantum mechanics. Conservation of information is pretty fundamental to every branch of physics. It's essentially a restatement of the second law of thermodynamics, that entropy must increase. If any two initial states of a system evolved into the same state, then the entropy of the system would have decreased.
 
  • #63
DavidReishi said:
Are you sure it requires all that? What I meant is that it wouldn't seem to make sense to hold that, from 10 feet away, the visual information of the bacteria isn't hitting my face but is scattered too far and wide, if the "visual information of the dish's form," i.e., that info that allows my brain to form a clear, crisp, hard-edged image of the dish, has no problem making it into my small pupils.

You are right- it doesn't have to be that complicated. Usually it's a lot easier and more qualitative. The easiest method is to use the coherency matrix: if the value is exactly 1, you can perfectly reconstruct the optical field at the source. If it's less than 1, there is uncertainty in the detected field as compared to the 'truth'. When the coherency matrix is zero, the optical field has become totally randomized. There are rules that the coherency matrix obeys during the propagation of light.

The references I have posted, especially the ones by Emil Wolf, spell all this out in detail. They are worth reading.

It can be shown that when an optical field propagates through random media (i.e. clear air turbulence), the coherency matrix decreases in value. Therefore, as light propagates through the atmosphere, through the ocean, through milk, through skin, it rapidly becomes no longer possible to perfectly reconstruct the object field because the coherency matrix is less than 1.
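
As a toy numerical illustration of that idea, using a scalar degree of coherence as a stand-in for the coherency-matrix value, and random phase noise as a crude stand-in for turbulence (so this is not a model of real propagation):

```python
# The normalized correlation between the fields at two points drops from ~1
# toward 0 as random phase noise is added. Purely illustrative.
import numpy as np

rng = np.random.default_rng(1)
n = 100000
phase = rng.uniform(0, 2 * np.pi, n)
field1 = np.exp(1j * phase)             # unit-amplitude field sampled at point 1

def degree_of_coherence(phase_noise_rms):
    field2 = np.exp(1j * (phase + phase_noise_rms * rng.normal(size=n)))
    return abs(np.mean(field1.conj() * field2))  # |<E1* E2>| with unit amplitudes

for sigma in [0.0, 0.5, 1.0, 2.0, 5.0]:
    print(f"phase-noise rms {sigma:>3.1f} rad -> |g12| = {degree_of_coherence(sigma):.3f}")
```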

DavidReishi said:
From your words, one might think that satellite images of Earth in which the human form is visible aren't practically possible. Is it merely the difference in scale between a person's head and a person's skin cell that would necessitate long periods of photon collection and loss of information due to movement?

Well, here again it depends on what you mean- KH-11 satellites can detect objects on Earth as small as a grapefruit.

https://en.wikipedia.org/wiki/KH-11_Kennan
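
A back-of-envelope diffraction check of that grapefruit figure, assuming the commonly quoted ~2.4 m aperture and an orbit of roughly 300 km (both approximate):

```python
# Rayleigh-limited ground resolution for an orbiting telescope; approximate figures.
wavelength = 550e-9   # m, visible light
aperture = 2.4        # m, commonly quoted KH-11 mirror size
altitude = 300e3      # m, rough orbital altitude

theta = 1.22 * wavelength / aperture        # Rayleigh angular limit, radians
ground_resolution = theta * altitude        # smallest resolvable feature on the ground
print(f"{ground_resolution * 100:.0f} cm")  # ~8 cm, roughly grapefruit-sized
```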

That doesn't mean it can image you with grapefruit-sized blurry Airy disks. But I have been careful not to state you can't see people from space. We can't see aliens on other planets. Whenever light propagates through disordered media there is a progressive loss of coherence. The rate of loss depends entirely on the specifics.

In any case, imaging people on Earth with satellites is dumb now- everyone uses drones. Humans can be well imaged with those.

DavidReishi said:
Again, doesn't satellite imagery containing the human form prove this to be a non-issue? Or are you saying that the decohering effect of Earth's atmosphere comes into play only regarding smaller visual details?

That's close to what I mean. Certainly, in order to image smaller details, the aperture has to get larger- either a monolithic aperture or a synthetic aperture, like VLBI. But there's another limit on how large the aperture can be before you simply stop gaining spatial detail, and that size is set by the coherence- once the light hitting different parts of the aperture is mutually incoherent, it cannot contribute additional information to the final image. The details of the coherence: the rate of loss in time and length scales depends entirely on the specifics.
 
  • #64
davenn said:
Read up on the use of the twin Keck scopes, and also on combining them with the scopes at the ESO's Andes site in South America
just one small snippet
Dave

Nobody denies the existence of optical interferometers. I'm talking about the limitations on interferometry, for example: what is the maximum path difference that is achievable using a Mach-Zehnder interferometer? What is the maximum pinhole separation that can be achieved with a Young interferometer? Lots of variables impact these calculations: not just the spectral bandwidth or spatial bandwidth, but also what happens to the coherence state as it propagates through a disordered medium like the atmosphere.

It should not be surprising that an infinite path difference or pinhole spacing can only be achieved in artificial, idealized conditions. The existence of maximum path-length differences and pinhole spacings reflects the loss of coherence, which is how information is stored in the optical field. Emil Wolf's references that I posted earlier spell this out.
 