Qualitative description of photon from faraway star

In summary, a photon is described as an electromagnetic wave, an oscillation of the E and B fields. The thread asks where a single photon that has traveled through vacuum for millions of years is "located", and what its fields look like, just before it deposits its energy in a CCD detector. Its absorption can be described quantum mechanically via the wavefunction and its collapse, or by assuming the photon travels in a definite direction, though the latter picture sits poorly with the way stars actually emit photons and with interference experiments.
  • #1
birulami
A photon is (described by) an electromagnetic wave, meaning the oscillation of the E and the B field.

Now there is this single photon that comes from a faraway star and hits the CCD detector of some astronomical satellite. As an approximation, consider that it traveled through vacuum for the last several million years since it was sent out by some star.

Just before the photon shoves its energy into the CCD and is recorded, what is the qualitative description of the E and the B field? I would be interested in statements like

- The fields are nearly/completely zero outside a radius of ... around the position just in front of the CCD.
- 99% of the fields' energy is concentrated just in front of the CCD.

Or, to the contrary, something like:

- The fields are spread far over the cosmos, roughly in a sphere around the star that emitted the photon.
- Only when the photon is actually registered by the CCD does the collapse of the wave function instantly suck the whole energy, previously spread over this sphere, into the CCD.

Or any other description that gives a qualitative idea of where this photon is "located" just before it hits the CCD.

Thanks,
Harald.
 
  • #2
I'd like to see some thoughts on this, too.

If the photon is characterized by the surface of a sphere around its place/time of origin, it seems hard to imagine that this sphere does not encounter appropriate opportunities to absorb the photon rather early compared to millions of light years of expansion...

For each radial distance of sphere expansion, would not every photon sphere encounter all possible absorption opportunities at that radius, in all directions? After a few million years the opportunity for the surface to have had an absorption encounter would seem to be high.

Photon spheres from the Sun would all encounter the planet Mercury - a whole planet of opportunities to absorb the photon. Eclipses and ordinary shadows seem to show that absorption is not a rare thing... so the expectation with an intervening planet might be that few or no photons make it as far as Earth; yet we know they do... and seem to make it a long way when coming from very distant sources, including the cosmic ray background.

And, with the sphere concept, the absorption is only allowed to happen once at one point on its surface, which would seem to have to prevent the absorption at all other locations on that surface even if they are separated by millions of light years. In a sense, all locations on the sphere would seem to need to have a kind of "entanglement" behavior to ensure a single absorption event for the entire life of the surface.
 
  • #3
But there is no difference in the odds, bahamagreen. Whether you regard the photon as a complete spherical wavefunction or as a small confined object, it is equally unlikely to be aimed so precisely that it hits the Earth from 100 light years away. And yes, it only gets to hit one thing, and there are many other things out there waiting to be hit. But every point you made in your argument against spheres applies just as well to the perilous adventures of a tiny discrete photon.

PS - Actually it's only a hemisphere. :wink: The photon presumably was emitted from the surface of the star, and only half the sky would be visible from there.
 
  • #4
bahamagreen said:
I'd like to see some thoughts on this, too.
And, with the sphere concept, the absorption is only allowed to happen once at one point on its surface, which would seem to have to prevent the absorption at all other locations on that surface even if they are separated by millions of light years.

In a sense, all locations on the sphere would seem to need to have a kind of "entanglement" behavior to ensure a single absorption event for the entire life of the surface.

How does Bill_K's argument - that the odds are the same whether the photon is considered to be a light sphere or a small confined object - resolve the issue of what stops the light sphere from collapsing at several points on its surface, once those surface points are separated by large distances?
 
  • #5
Bill_K,
I'm not seeing how the odds are the same.

If the photon is considered to be a small confined object, then it would need to be "...aimed so precisely that it hits the Earth from 100 light years away..."; but if it is regarded as an expanding sphere it will hit everything outside its point of origin in every direction without aiming until there is an absorption at one point on the surface of expansion.
How would you aim an expanding sphere, or need to?

And then that raises the question robinpike and I asked: how is the absorption restricted to a single event when the radius is very large?
 
  • #6
Two possible views:

Quantum mechanical, with collapse interpretation: Every photon is emitted over a wide angle as a wave function. Each object which gets hit by the wave function acts as a measurement - the wave function collapses to "hit that object (and deposits all its energy there)" or "did not hit that object (and continues as a wide-spread wave function)". The probability that a photon hits your CCD is tiny, but as we have so many photons, some of them do.
The same can be done with all other interpretations of quantum mechanics.

Alternative approach: Assume that all photons have some definite direction; some of them will hit the CCD. That gives good results if you do not try to perform double-slit experiments or similar things.
 
  • #7
I maintain that the second approach does not happen. The nature of the photons depends on the process that creates them. Stars produce black body radiation, and black body radiation comes from random collisions between atoms. Each collision is predominantly electric dipole, and the radiation from it is therefore dipole-distributed over solid angle. To get a photon that is "aimed", with a narrowly defined k vector, you would need a large-L process.
 
  • #8
It does not happen, and as I posted it will lead to wrong predictions in interference effects (like a double slit). However, it gives the correct result if you just want to know how many photons will hit your CCD pixels.
 
  • #9
It seems like I preselected the answers by posting in the Quantum Physics group. Maybe I should have gone to the classical physics forum, because I would really like to get a qualitative idea about the E and the B field.

Bill_K says that one process to have created the photon is a dipole and
the radiation from it is therefore dipole-distributed over solid angle.
So can someone explain that last bit in slightly more detail? I take it that "angle" actually means an inverted funnel with that opening angle. I further guess that the photon (the E/B field ripple) emanates at the narrow end and over time travels out of the funnel in the form of a cut of a sphere's surface matching that angle. As for the "thickness" of the surface, i.e. the radial distance within which the E/B field ripple is clearly nonzero, can we roughly say how much it is in units of the wavelength?

Or is my description completely wrong? How is it then?
 
  • #10
birulami said:
It seems like I preselected the answers by posting in the Quantum Physics group. Maybe I should have gone to the classical physics forum, because I would really like to get a qualitative idea about the E and the B field.
No, you came to the right place! It's important to remember that, first and foremost, a photon is a quantum mechanical object. We often get this question: "What does a photon really look like? What shape is it? How long? Is it just one cycle long, or N cycles long, etc." These are meaningful for classical wavetrains, but a single photon is described by a wavefunction that gives you a probability (amplitude) of finding a photon at a particular point. So the extent of the wavefunction is not the same as a region of E and B fields. Any more than the electron's orbit in a hydrogen atom tells you how big the electron is!
the radiation from it is therefore dipole-distributed over solid angle.
See the picture of electric dipole radiation in Wikipedia under "dipole antenna".
 
  • #11
Bill_K said:
... So the extent of the wavefunction is not the same as a region of E and B fields. ...

See the picture of electric dipole radiation in Wikipedia under "dipole antenna".

Well, but the E and B fields are what I am after. Are you saying that the Wikipedia picture qualitatively also describes a single photon?
 
  • #12
birulami said:
Well, but the E and B fields are what I am after.
I give up. Sorry.
 
  • #13
birulami said:
Well, but the E and B fields are what I am after. Are you saying that the Wikipedia picture qualitatively also describes a single photon?
birulami - your question is a good one and I understand your continued frustration. It may be a simple matter of certain respondents saying one thing and meaning another, but as it stands one is left with a confusing and utterly nonsensical picture: a photon that, on one presumably valid interpretation, has a physically real extent of maybe billions of light-years in radius, magically undergoing instantaneous collapse down to a point at, say, some CCD detector. It needs to be clearly stated whether that billions-of-light-years radial extent refers to a fantastically diluted spherical shell of physically real E & B fields, in keeping at least roughly with the field of a classical radiator, or whether it just refers to a point-like photon-as-particle, or at least spatially compact entity, whose probability of detection is described by said shell. Or something else again.
 
  • #14
It may be a simple matter of certain respondents saying one thing and meaning another,
If you mean me, Q-reeus, I have said the same thing consistently throughout, and as clearly as I can. Sorry if you're confused.
a physically real extent of maybe billions of light-years in radius, magically undergoing instantaneous collapse down to a point
It is not "magical", just quantum mechanics. QM has other interpretations besides wavefunction collapse, but they amount to the same magic trick.
a fantastically diluted spherical shell of physically real E & B fields,
I have said it's a wavefunction, which measures probability. I include wavefunctions in things which are "physically real." And yes, a photon does have E and B fields, and yes, they are fantastically diluted a billion miles away, but they have meaning only at the point of detection. Trying to visualize them as a sphere's worth of classical EM field is totally incorrect.
 
  • #15
Bill_K said:
If you mean me, Q-reeus, I have said the same thing consistently throughout, and as clearly as I can. Sorry if you're confused.
Wasn't just you, but did have your comments in #7 in mind:
I maintain that the second approach does not happen. The nature of the photons depends on the process that creates them. Stars produce black body radiation, and black body radiation comes from random collisions between atoms. Each collision is predominantly electric dipole, and the radiation from it is therefore dipole-distributed over solid angle. To get a photon that is "aimed", with a narrowly defined k vector, you would need a large-L process.
Well that reads a lot like 'classical radiation field' to me. As I said last time, one needs to be very careful to clearly delineate whether the above does or does not imply a physically real classical (or near enough to classical) field distribution - and it sure seems to.
It is not "magical", just quantum mechanics. QM has other interpretations besides wavefunction collapse, but they amount to the same magic trick.
I'm a complete layman when it comes to subtleties of QM, but perhaps this is where the proper understanding of this 'magic trick' needs to be spelled out - layman's language of course.
I have said it's a wavefunction, which measures probability. I include wavefunctions in things which are "physically real." And yes, a photon does have E and B fields, and yes, they are fantastically diluted a billion miles away, but they have meaning only at the point of detection. Trying to visualize them as a sphere's worth of classical EM field is totally incorrect.
This leaves me confused as ever. On the one hand, you say the spherical shell represents a wavefunction (= probability calculational tool?), on the other that there are physically real and fantastically diluted E & B fields = the classical EM prediction for, say, a dipole oscillator. So what is it then? Physically real instantaneous collapse of a billions of light years radius spherical pulse of real E & B fields, or just detection at a point of a point-like bundle of energy that was in fact always point-like? Not trying to be argumentative here btw - I am genuinely baffled.
 
  • #16
Q-reeus said:
So what is it then? Physically real instantaneous collapse of a billions of light years radius spherical pulse of real E & B fields, or just detection at a point by a point-like bundle of energy that was in fact always point-like? Not trying to be argumentative here btw - I am genuinely baffled.
It isn't (2) because the wavefunction was not always point-like.

It also isn't (1) because it's not the EM fields which collapse, it's the photon wavefunction. They are crucially different because the EM field is an observable measurement, whereas the wavefunction is not. It's not as if the EM field is suddenly concentrated to a point where the photon happens to be measured - it's always concentrated around the photon's position, and this position could be at many different points in space until it is measured. That's my understanding anyway. Everything must be thought of via the wavefunction, which alone collapses due to measurement by a physical operator. Until that collapse occurs, there have, by definition, been no measured EM fields due to the photon anywhere.
 
  • #17
MikeyW said:
It isn't (2) because the wavefunction was not always point-like.

It also isn't (1) because it's not the EM fields which collapse, it's the photon wavefunction. They are crucially different because the EM field is an observable measurement, whereas the wavefunction is not. It's not as if the EM field is suddenly concentrated to a point where the photon happens to be measured - it's always concentrated around the photon's position, and this position could be at many different points in space until it is measured.


That's my understanding anyway. Everything must be thought of via the wavefunction, which alone collapses due to measurement by a physical operator. Until that collapse occurs, there have, by definition, been no measured EM fields due to the photon anywhere.

Well, then you should first define what you mean by the "photon's position". There is no observable in the strict sense of quantum theory which can be interpreted as "position" for massless particles with spin greater than or equal to 1.

Also, there is no necessity whatsoever to believe in the collapse of a quantum state at the "instant" of measurement, which is anyway always a finite (uncertain) time interval.

Besides, what we measure as "light" from distant sources (particularly in the extreme case of a very distant galaxy) is almost never a single photon but a coherent state, and thus a very large extended "wave packet" with an indeterminate photon number (the dimmer the light you want to detect, the better it is approximated by a superposition of the vacuum and a single-photon state).

You get a far better approximate intuitive idea about "light" from a distant source by thinking in terms of classical EM waves than by thinking about "photons".
 
  • #18
MikeyW said:
It isn't (2) because the wavefunction was not always point-like.
Thanks for your input MikeyW, but I never claimed or implied the wavefunction was point-like. On the contrary, the 'wavefunction' (but hang on - isn't it a no-no for a photon to even have a wavefunction?!) was 'spherical pulse-like' in predicting the probability of detection of what was always a concentrated entity known as a photon. For me one telling argument here is conservation of momentum. When a photon is absorbed, say at that quintessential CCD detector, there is an impulse dp = hf/c. It acts in a very specific direction - away from the source. If we wish that momentum conservation is preserved at all times, how can it be otherwise than that the source of that absorbed photon suffered an equal and opposite recoil momentum when emitting in the first place? Which is inconsistent with a spreading, axially-symmetric classical dipole oscillator field, agreed?
It also isn't (1) because it's not the EM fields which collapse, it's the photon wavefunction. They are crucially different because the EM field is an observable measurement, whereas the wavefunction is not. It's not as if the EM field is suddenly concentrated to a point where the photon happens to be measured - it's always concentrated around the photon's position, and this position could be at many different points in space until it is measured.

That's my understanding anyway. Everything must be thought of via the wavefunction, which alone collapses due to measurement by a physical operator. Until that collapse occurs, there have, by definition, been no measured EM fields due to the photon anywhere.
I agree with this part - without getting into issues about nature of 'wavefunction collapse'.
 
  • #19
vanhees71 said:
Well, then you should first define what you mean by the "photon's position". There is no observable in the strict sense of quantum theory which can be interpreted as "position" for massless particles with spin greater than or equal to 1.

Also, there is no necessity whatsoever to believe in the collapse of a quantum state at the "instant" of measurement, which is anyway always a finite (uncertain) time interval.

Besides, what we measure as "light" from distant sources (particularly in the extreme case of a very distant galaxy) is almost never a single photon but a coherent state, and thus a very large extended "wave packet" with an indeterminate photon number (the dimmer the light you want to detect, the better it is approximated by a superposition of the vacuum and a single-photon state).
Is this just nit-picking? When I say position I mean the position of the CCD which records the photon, and when I say "instant", I mean at the time that we get the measurement signal. Do these minor alterations not remove your objections?

I would think any correct explanation necessarily uses quantum mechanics, and the classical analogue may be more or less useful as a rough explanation depending on the circumstances. For a solar panel facing the sun, I'd say the classical explanation is fine. For a very distant star emitting black-body radiation, I would say the classical explanation can lead to some big problems, not least because the two fundamental processes (BB radiation and the CCD's photoelectric effect) are strictly non-classical.

In this classical explanation, do we have a huge (hemi)sphere of EM waves which decrease with 1/r^2? What is the rough value of the EM field when visible light from a yellow giant has traveled a billion light years? How can a CCD possibly detect such a small signal over the instrument noise?
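For a rough sense of the numbers behind these questions, here is a back-of-envelope sketch (a Sun-like luminosity and 550 nm light are assumed purely for illustration; none of these figures come from the thread):

```python
# Order-of-magnitude estimate: classical flux, photon rate and field amplitude
# of starlight after travelling 1 billion light years (illustrative values only).
import math

L_star = 4e26        # W, roughly a solar luminosity (assumed)
r = 1e9 * 9.46e15    # 1 billion light years in metres
wavelength = 550e-9  # m, visible light (assumed)

h = 6.626e-34        # Planck constant, J s
c = 3.0e8            # speed of light, m/s
eps0 = 8.854e-12     # vacuum permittivity, F/m

flux = L_star / (4 * math.pi * r**2)      # W/m^2 arriving at the detector
E_photon = h * c / wavelength             # J carried by one photon
photon_rate = flux / E_photon             # photons per second per m^2

# If the flux were a classical plane wave, I = (1/2) c eps0 E0^2
E0 = math.sqrt(2 * flux / (c * eps0))     # peak electric field, V/m

print(f"flux        ~ {flux:.1e} W/m^2")
print(f"photon rate ~ {photon_rate:.1e} photons/(s m^2)")
print(f"E0          ~ {E0:.1e} V/m")
```

With these assumed numbers the result is of order one photon per square metre every couple of weeks, and a naive classical amplitude of order 10^-11 V/m - which is exactly the tension being pointed at here: a continuously diluted classical field versus discrete, full-energy detection events.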
 
  • #20
vanhees71 said:
Besides, what we measure as "light" from distant sources (particularly in the extreme case of a very distant galaxy) is almost never a single photon but a coherent state, and thus a very large extended "wave packet" with an indeterminate photon number

Typically it will be in a thermal state, but that will of course only extend the indeterminacy of the photon number.

MikeyW said:
For a solar panel facing the sun, I'd say the classical explanation is fine. For a very distant star emitting black-body radiation, I would say the classical explanation can lead to some big problems, not least because the two fundamental processes (BB radiation and the CCD's photoelectric effect) are strictly non-classical.

Well, BB radiation needs QM to describe the frequency distribution, but is that really important here? The photoelectric effect does not need a quantized EM field; it works fine with a classical field. Therefore antibunching, not the photoelectric effect, is the unambiguous proof of single photons existing under some circumstances.

MikeyW said:
In this classical explanation, do we have a huge (hemi)sphere of EM waves which decrease with 1/r^2? What is the rough value of the EM field when visible light from a yellow giant has traveled a billion light years? How can a CCD possibly detect such a small signal over the instrument noise?

Why not? Some EMCCDs already have single-photon sensitivity at a really low dark count. Besides, localizing the light field is not that easy. All photons inside one coherence volume are indistinguishable in principle. Far away from the source the coherence volume gets larger, so it can be quite large. Hanbury Brown and Twiss managed to measure the angular diameter of Sirius that way already in the fifties, by measuring the coherence length of its emission. There, the coherence volume was at least larger than two large telescopes.
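As a sense of scale for the coherence-volume remark, here is a minimal sketch using the rule of thumb that the transverse coherence length of light from a source of angular diameter θ is roughly λ/θ (the ~6 mas angular diameter of Sirius and λ = 500 nm are assumed illustrative values):

```python
# Transverse coherence length of starlight, l_t ~ lambda / theta
import math

wavelength = 500e-9                              # m (assumed)
theta_mas = 6.0                                  # Sirius angular diameter, milliarcsec (approximate)
theta = theta_mas * 1e-3 / 3600 * math.pi / 180  # convert to radians

l_t = wavelength / theta
print(f"angular diameter            ~ {theta:.2e} rad")
print(f"transverse coherence length ~ {l_t:.1f} m")
```

That comes out at a few tens of metres, which is the kind of baseline over which Hanbury Brown-Twiss intensity correlations fall off.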
 
  • #21
Cthugha said:
Why not? Some EMCCDs already have single-photon sensitivity at a really low dark count. Besides, localizing the light field is not that easy. All photons inside one coherence volume are indistinguishable in principle. Far away from the source the coherence volume gets larger, so it can be quite large. Hanbury Brown and Twiss managed to measure the angular diameter of Sirius that way already in the fifties, by measuring the coherence length of its emission. There, the coherence volume was at least larger than two large telescopes.
All that may be perfectly correct when it comes to detecting light from something like a distant star outputting an enormous and incoherent flux of photons. It still does not deal with the issue of emission and absorption of a single photon, having energy E = hf, and momentum p = khf/c. Whether or not emission is a random process, for any given such event, local conservation of momentum requires the emitter receive a kick of precisely -khf/c. After a journey that may last billions of years, an absorption event occurs and the absorber receives a kick of +khf'/c (redshifted somewhat owing to Hubble expansion - f > f' - but we assume 'the universe' absorbs the difference somehow). Over mere millions of light-years of travel, the difference f - f' is negligible. Point is, requiring that conservation of momentum hold, one cannot have the emitted photon expand as either a classical-EM-style axially symmetric continuous field, or the equivalent wavefunction description, capable of collapsing randomly to a single detection event that could be in any direction. Sans issues like gravitational deflection, conservation of momentum demands that emission and absorption events are linked as equal and opposite impulse events.

The one imo bizarre way out of this is, afaik, to invoke retrocausality/superdeterminism. Emission is then described by a spherical-type wavefunction, absorption is 'random' in direction, but once established - or rather having been pre-established from presumably the beginning of the universe - the future absorption event conspires across space and time with the past emission event to ensure conservation laws are respected. Seems awfully strained. Simpler and saner imo that emission followed by absorption was that of a point-like entity known as a photon that actually existed and traveled between these two events. Or does something else make sense that avoids the point-like photon model?
 
  • #22
Q-reeus said:
All that may be perfectly correct when it comes to detecting light from something like a distant star outputting an enormous and incoherent flux of photons. It still does not deal with the issue of emission and absorption of a single photon, having energy E = hf, and momentum p = khf/c. Whether or not emission is a random process, for any given such event, local conservation of momentum requires the emitter receive a kick of precisely -khf/c.

That depends strongly on the scenario you have in mind. Fortunately (or unfortunately) single photon emitters are rare and it is quite uncommon to encounter something really emitting a single photon.

For a single emitter there will of course be recoil. The corresponding recoil velocity is on the order of 3 m/s for a hydrogen atom (Lyman alpha line) and it is on the order of 3 cm/s for a sodium atom. That already makes it clear that for large/heavy emitters the recoil drowns in uncertainty. This is also why mirrors do not cause collapse. Recoil is also comparably tiny for pretty much every practically relevant light emitter in everyday life: The sun, streetlights, other suns far away and so on.
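Those recoil figures are easy to check with v = p/m = h/(mλ); the sketch below uses the usual Lyman-alpha and sodium-D wavelengths as assumed inputs:

```python
# Single-photon recoil velocity of an atomic emitter, v = h / (m * lambda)
h = 6.626e-34    # Planck constant, J s
u = 1.661e-27    # atomic mass unit, kg

cases = {
    "H, Lyman-alpha (121.6 nm)": (1.008 * u, 121.6e-9),
    "Na, D line (589 nm)": (22.99 * u, 589e-9),
}

for name, (mass, lam) in cases.items():
    v = h / (mass * lam)
    print(f"{name}: recoil velocity ~ {v:.2g} m/s")
```

This reproduces the ~3 m/s and ~3 cm/s figures quoted above.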

Now if you would indeed use hydrogen as a single photon source, you will most likely just observe a really short coherence time of the emission. If you want to describe the system correctly, it will most likely be in a superposition state. You may imagine it as being entangled to some degree due to - as you pointed out - momentum conservation. However, for such a system, the part which will cause collapse is most likely not the photon, but the atom as it interacts more strongly with its surroundings. The tail does not wag the dog, so to speak.
 
  • #23
Cthugha said:
That depends strongly on the scenario you have in mind. Fortunately (or unfortunately) single photon emitters are rare and it is quite uncommon to encounter something really emitting a single photon.

For a single emitter there will of course be recoil. The corresponding recoil velocity is on the order of 3 m/s for a hydrogen atom (Lyman alpha line) and it is on the order of 3 cm/s for a sodium atom. That already makes it clear that for large/heavy emitters the recoil drowns in uncertainty. This is also why mirrors do not cause collapse.
Cannot follow the logic here. If the mirror, perhaps weighing tons, has a carbon blacked surface, there is strong likelihood of absorption, having virtually nothing to do with mirror mass, and everything to do with atomic/molecular composition. No?
Recoil is also comparably tiny for pretty much every practically relevant light emitter in everyday life: The sun, streetlights, other suns far away and so on.
Not sure it follows like so. There are a huge number of individual emission events going on in these entities. For any particular one such event, how would the mass of a star be involved, rather than just that of say two colliding ions? But supposing a star's mass somehow did dramatically reduce recoil for all such events. Apart from that reducing line-broadening, how does it affect argument re conservation of momentum for any given emission event? We still have linearity/superposition principle holding good, right?
Now if you would indeed use hydrogen as a single photon source, you will most likely just observe a really short coherence time of the emission. If you want to describe the system correctly, it will most likely be in a superposition state. You may imagine it as being entangled to some degree due to - as you pointed out - momentum conservation. However, for such a system, the part which will cause collapse is most likely not the photon, but the atom as it interacts more strongly with its surroundings. The tail does not wag the dog, so to speak.
This part is probably getting to the crux of it but is also unclear to me. Are you saying in effect that absorption event is random in impulse sense - owing to environment deciding how absorption occurs? What will that do to overall conservation of momentum? I don't see how that impulse khf/c being in some way partitioned between an absorbing atom and remainder of environment in any way alters the argument about need for emission/absorption of a point-like photon entity. Otherwise - some kind of imo weird retrocausality scenario before mentioned seems inevitable. Not that I know much here.
 
  • #24
Q-reeus said:
Cannot follow the logic here. If the mirror, perhaps weighing tons, has a carbon blacked surface, there is strong likelihood of absorption, having virtually nothing to do with mirror mass, and everything to do with atomic/molecular composition. No?

Eh? I do not get your point. Take a simple Mach-Zehnder interferometer. You have one mirror in each path before combining the beams again, each deflecting the beam by 90 degrees. Therefore one could try to measure which path a single photon sent through the interferometer took by watching the mirror recoil. Nevertheless you cannot get which-way information this way, as the tiny recoil does not place the mirror into a new eigenstate orthogonal to the old one. Just draw the phase space representation of the state with the uncertainties. Assume a minimum uncertainty state for simplicity, maybe one where the uncertainties are distributed evenly. Draw the same state for slightly shifted momentum and check the overlap. I mean it is not like this emission process constitutes a measurement of momentum or will put a whole star into a momentum eigenstate.

Q-reeus said:
Not sure it follows like so. There are a huge number of individual emission events going on in these entities. For any particular one such event, how would the mass of a star be involved, rather than just that of say two colliding ions?

Hmm, are you familiar with very basic solid state physics? Because SSP is all about such stuff, especially photons. Whether you have collective excitations of a large system or a single ion that needs to take the whole recoil makes a huge difference. Of course stars are somewhat different to treat than solids, but some basic principles apply there, too.

Q-reeus said:
But supposing a star's mass somehow did dramatically reduce recoil for all such events. Apart from that reducing line-broadening, how does it affect argument re conservation of momentum for any given emission event? We still have linearity/superposition principle holding good, right?

It does not reduce recoil, but the whole system recoils. The Mössbauer effect which got Mössbauer the Nobel prize in 1961 relies on that fact. You may want to read up on it. It is very interesting.

Q-reeus said:
This part is probably getting to the crux of it but is also unclear to me. Are you saying in effect that absorption event is random in impulse sense - owing to environment deciding how absorption occurs? What will that do to overall conservation of momentum? I don't see how that impulse khf/c being in some way partitioned between an absorbing atom and remainder of environment in any way alters the argument about need for emission/absorption of a point-like photon entity. Otherwise - some kind of imo weird retrocausality scenario before mentioned seems inevitable.

No, all I wanted to say was that for VERY light emitters emitting single photons, the photon detection probability for a single photon at a very large distance will not necessarily be uniformly distributed (additionally considering conservation of energy already complicates that). The system of emitter and photon must conserve momentum, and the momentum change of the emitter might be large enough that it actually changes its state. So even if it was a light composite emitter (making conservation of energy easier) and you get some entangled situation for some time, where emitter and photon are entangled and only the amount of momentum transferred is fixed, the first interaction of either the photon or the emitter will decohere the system and leave both with well-defined positions. And it is very, very likely that the emitter will interact first.
However, even in such a situation the photon detection probability distribution (as evidenced by repeating the emission process several times) would still look completely isotropic unless one also checks the state of the emitter after decoherence has taken place and compares these results with the photon detection positions. Then one would see anticorrelations.
 
  • #25
Cthugha said:
Eh? I do not get your point. Take a simple Mach-Zehnder interferometer. You have one mirror in each path before combining the beams again, each deflecting the beam by 90 degrees. Therefore one could try to measure which path a single photon sent through the interferometer took by watching the mirror recoil. Nevertheless you cannot get which-way information this way, as the tiny recoil does not place the mirror into a new eigenstate orthogonal to the old one. Just draw the phase space representation of the state with the uncertainties. Assume a minimum uncertainty state for simplicity, maybe one where the uncertainties are distributed evenly. Draw the same state for slightly shifted momentum and check the overlap. I mean it is not like this emission process constitutes a measurement of momentum or will put a whole star into a momentum eigenstate.
So you were really talking about the act of taking actual measurements to determine single-photon recoil spoiling things, whereas I was on about just the principle of momentum linkage tying emission event to absorption event. I wasn't arguing with uncertainty principle.
Hmm, are you familiar with very basic solid state physics? Because SSP is all about such stuff, especially photons. Whether you have collective excitations of a large system or a single ion that needs to take the whole recoil makes a huge difference. Of course stars are somewhat different to treat than solids, but some basic principles apply there, too.
I am quite familiar with the concept though not details of collective excitations in solids - phonons, plasmons, polaritons etc. What does that have in common with your example of a typical star? Are you seriously suggesting that such a super-heated assembly of chaotically moving ions and nuclei, collectively bound solely by an overall gravity, can in any way be treated the same way as a room temperature piece of solid matter? There is for sure e.g. overall rotation, 'breathing' and other collective oscillation modes, convective turbulence at various scales, including magnetically linked prominences, flares etc. None of these though are even remotely similar or connected to a star-wide collective recoil phenomenon linked to photon emission as you suggested. Even in extreme case of a neutron star, it is generally believed superfluidity/superconductivity is restricted to interior region, and at surface where emissions to outside is possible, there is a solid crust that is not in superfluid/superconducting state. Could be wrong though.
It does not reduce recoil, but the whole system recoils. The Mössbauer effect which got Mössbauer the Nobel prize in 1961 relies on that fact. You may want to read up on it. It is very interesting.
I'm familiar with, as you say, the interesting concept of the Mossbauer effect - but afaik that is a somewhat rare if not entirely unique collective phenomenon restricted to a solid - a cobalt-57/iron-57 alloy - and afaik is not even exhibited by the vast majority of other solids, let alone an ill-defined mostly-plasma entity like a star (ill-defined in that there is a continual streaming away of radiant energy and solar wind particles in a semi-chaotic manner, thus 'star' has by nature an ill-defined boundary). Checked out HyperPhysics.com site http://hyperphysics.phy-astr.gsu.edu/hbase/nuclear/mossfe.html
Mossbauer found that if he cooled the emitter, you could reach a condition where the emitting nucleus could not recoil by itself. Qualitatively, the reason is that at sufficiently low temperatures an atom in a crystal lattice cannot recoil individually. The quantization of the vibrational states of the lattice causes the energy of recoil to be absorbed by the lattice as a whole.

It was a great breakthrough to realize that you could get resonance absorption of gamma rays by putting the source nuclei in a crystal and cooling it. To see how many iron nuclei would have to recoil together to keep the gamma within the natural linewidth:

Compared to Avogadro's number, that's not very many. In fact, it is a speck of matter too small to be seen in an optical microscope. It would follow that any tiny crystal within a cobalt-57-containing piece of iron would meet the conditions for resonance absorption if cooled sufficiently." (bold emphasis added)
So even in a solid of special composition, cooling to low temperature is required just to achieve the low but necessary proportion of nuclei that need to act collectively re recoil. I believe that to argue from such a delicate and rare solid-state situation to the claim that a star, having zero lattice structure, can be a similarly quantum mechanically governed collective-recoil entity is, to put it mildly, stretching things. Sure, eventually any individual recoil event will be thermally/gravitationally absorbed into the whole, but by that lengthy and ill-defined time, the culprit has surely long escaped with characteristics set by the very local collision/emission event. Is not thermal line-broadening in stellar spectra a well recognized phenomenon?
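For what it's worth, here is a back-of-envelope version of the calculation omitted from the HyperPhysics quote above, using the 14.4 keV Fe-57 line and a natural width of roughly 5e-9 eV (standard textbook values, inserted here as assumptions rather than taken from the quoted page):

```python
# How many Fe-57 nuclei must share the recoil so that
# E_R = E_gamma^2 / (2 * N * M * c^2) stays within the natural linewidth Gamma?
E_gamma = 14.4e3      # eV, Fe-57 Moessbauer gamma energy (assumed)
Gamma = 5e-9          # eV, natural linewidth of that line (assumed)
M_c2 = 57 * 931.5e6   # eV, rest energy of a single Fe-57 nucleus

E_R_free = E_gamma**2 / (2 * M_c2)   # recoil energy of one free nucleus
N_min = E_R_free / Gamma             # nuclei needed to dilute the recoil below Gamma

print(f"free-nucleus recoil energy       ~ {E_R_free:.2e} eV")
print(f"nuclei needed to recoil together ~ {N_min:.1e}")
```

That gives a few times 10^5 nuclei, vanishingly small next to Avogadro's number, consistent with the "speck of matter" remark in the quote.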
No, all I wanted to say was that for VERY light emitters emitting single photons, the photon detection probability for a single photon at a very large distance will not necessarily be uniformly distributed (additionally considering conservation of energy already complicates that). The system of emitter and photon must conserve momentum, and the momentum change of the emitter might be large enough that it actually changes its state. So even if it was a light composite emitter (making conservation of energy easier) and you get some entangled situation for some time, where emitter and photon are entangled and only the amount of momentum transferred is fixed, the first interaction of either the photon or the emitter will decohere the system and leave both with well-defined positions. And it is very, very likely that the emitter will interact first.
However, even in such a situation the photon detection probability distribution (as evidenced by repeating the emission process several times) would still look completely isotropic unless one also checks the state of the emitter after decoherence has taken place and compares these results with the photon detection positions. Then one would see anticorrelations.
The single serious argument I can take from all this has to do then with above matter of entanglement followed by decoherence. And that very likely random environmental disruption is at emitter end of things. From this it presumably follows that since photon emission has no meaning until decoherence, and since decoherence is random in nature, there is no way to argue for emission of point-like entity with a definite if unknown momentum. That about it? I don't know to which QM interpretational school you belong to, but I suspect it's one where a photon has no existence, or at least no properly defined notion of existence, between emission and absorption. Correct?

Suppose we have a collimated source of otherwise random thermal radiation - say from a tungsten filament lamp. And that the source is extremely attenuated via a filter near the source. Such that within a given frequency band, single-photon emission is likely over an interval easily handled by a distant sufficiently sensitive detector. I presume you will agree that such a distant detector/absorber must be oriented within a certain arbitrarily narrow angular range (depending on degree of collimation) about collimated source in order to have reasonable probability of detecting such single-photon events. And that direction of net impulse received by detector will over time be completely consistent with drawing a straight line between emitter and detector - i.e. collector, if alone in deep space say, gradually acquires a drift velocity directly away from collimated source. How would this situation be reconciled with photon-as-spherically-expanding probability wave?
 
  • #26
Q-reeus said:
So you were really talking about the act of taking actual measurements to determine single-photon recoil spoiling things, whereas I was on about just the principle of momentum linkage tying emission event to absorption event. I wasn't arguing with uncertainty principle.

No, but I was arguing with it. That is a very central point you do not seem to grasp.

Q-reeus said:
I am quite familiar with the concept though not details of collective excitations in solids - phonons, plasmons, polaritons etc. What does that have in common with your example of a typical star? Are you seriously suggesting that such a super-heated assembly of chaotically moving ions and nuclei, collectively bound solely by an overall gravity, can in any way be treated the same way as a room temperature piece of solid matter?

I said that collective excitations are present. Nothing more and nothing less. You need some collective excitation that can take up arbitrarily small amounts of momentum. In a solid these are typically phonons. The closest thing in stars is typically something like density waves, maybe something similar to plasmons. These of course do not have particle nature, as this is not a well-structured lattice but a rather chaotic situation, but the important thing is that they can carry momentum.

Q-reeus said:
There is for sure e.g. overall rotation, 'breathing' and other collective oscillation modes, convective turbulence at various scales, including magnetically linked prominences, flares etc. None of these though are even remotely similar or connected to a star-wide collective recoil phenomenon linked to photon emission as you suggested.

The emission from a star is in many cases very collective, even when not taking density waves into account, when one considers that the stars are typically not transparent for all kinds of light they emit. There may be significant reabsorption which also often leads to collective phenomena. However, one can of course not generalize that for each and every star.

Q-reeus said:
Even in extreme case of a neutron star, it is generally believed superfluidity/superconductivity is restricted to interior region, and at surface where emissions to outside is possible, there is a solid crust that is not in superfluid/superconducting state. Could be wrong though.

What does that have to do with the topic at hand?

Q-reeus said:
I'm familiar with, as you say, the interesting concept of the Mossbauer effect - but afaik that is a somewhat rare if not entirely unique collective phenomenon restricted to a solid - a cobalt-57/iron-57 alloy - and afaik is not even exhibited by the vast majority of other solids, let alone an ill-defined mostly-plasma entity like a star (ill-defined in that there is a continual streaming away of radiant energy and solar wind particles in a semi-chaotic manner, thus 'star' has by nature an ill-defined boundary). Checked out HyperPhysics.com site http://hyperphysics.phy-astr.gsu.edu/hbase/nuclear/mossfe.html

So even in a solid of special composition, cooling to low temperature is required just to achieve the low but necessary proportion of nuclei that need to act collectively re recoil. I believe that to argue from such a delicate and rare solid-state situation to the claim that a star, having zero lattice structure, can be a similarly quantum mechanically governed collective-recoil entity is, to put it mildly, stretching things.

Of course you need to cool it. The iron is an "alien species" in the lattice. And of course there are not many examples, as there are not many gamma emitters. Mentioning Mössbauer was basically for practical reasons, because his results are easy to find and easy to understand. The optical analogue is basically the zero-phonon line, which is very commonly found.

Q-reeus said:
Sure, eventually any individual recoil event will be thermally/gravitationally absorbed into the whole, but by that lengthy and ill-defined time, the culprit has surely long escaped with characteristics set by the very local collision/emission event. Is not thermal line-broadening in stellar spectra a well recognized phenomenon?

What makes you think it is lengthy? This is of course a question of density, temperature and other things. And of course there is thermal line broadening. What is the point?

Q-reeus said:
The single serious argument I can take from all this has to do then with above matter of entanglement followed by decoherence. And that very likely random environmental disruption is at emitter end of things. From this it presumably follows that since photon emission has no meaning until decoherence, and since decoherence is random in nature, there is no way to argue for emission of point-like entity with a definite if unknown momentum. That about it?

This has not much to do with what I have written.

Q-reeus said:
I don't know to which QM interpretational school you belong to, but I suspect it's one where a photon has no existence, or at least no properly defined notion of existence, between emission and absorption. Correct?

As I do semiconductor quantum optics for a living, I am following "shut up and calculate". But any approach using spatially well-defined realistic bullet-like photons has led to contradictions in two-photon interference experiments (unless you include Bohmian mechanics, where you split things into the photon and the guiding wave).

Q-reeus said:
Suppose we have a collimated source of otherwise random thermal radiation - say from a tungsten filament lamp. And that the source is extremely attenuated via a filter near the source. Such that within a given frequency band, single-photon emission is likely over an interval easily handled by a distant sufficiently sensitive detector.

One initial point. It may sound like nitpicking, but it is a basic issue of quantum optics. Attenuating a light source never gives you single photons. Having single photons means you get a Fock state with a photon number variance of zero. This is impossible using attenuation. You can get one photon on average, but that is a huge difference in experiments.
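A small sketch of why "one photon on average" is not the same as a single photon: an attenuated coherent beam keeps Poissonian photon-number statistics, so multi-photon events never vanish, whereas a one-photon Fock state has exactly one photon every time (the mean photon number of 0.1 is just an illustrative choice):

```python
# Photon-number statistics of an attenuated coherent beam vs a one-photon Fock state
import math

mu = 0.1  # mean photon number per detection window (assumed)

def poisson(n, mean):
    return math.exp(-mean) * mean**n / math.factorial(n)

p0, p1 = poisson(0, mu), poisson(1, mu)
p_multi = 1.0 - p0 - p1

print(f"attenuated beam: P(0)={p0:.3f}, P(1)={p1:.3f}, P(n>=2)={p_multi:.4f}")
print(f"share of non-empty windows containing >1 photon: {p_multi / (1.0 - p0):.3f}")
# A true one-photon Fock state would give P(1) = 1 with zero photon-number variance.
```

With this assumed mean, roughly 5% of the windows that contain any light at all contain more than one photon.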

The more important question is: is the emission isotropic initially? Then you can obviously not collimate it completely. The complete solid angle you can collect depends on the NA of your lens.

Q-reeus said:
I presume you will agree that such a distant detector/absorber must be oriented within a certain arbitrarily narrow angular range (depending on degree of collimation) about collimated source in order to have reasonable probability of detecting such single-photon events. And that direction of net impulse received by detector will over time be completely consistent with drawing a straight line between emitter and detector - i.e. collector, if alone in deep space say, gradually acquires a drift velocity directly away from collimated source. How would this situation be reconciled with photon-as-spherically-expanding probability wave?

Why would anyone assume a collimated light beam to be associated with an isotropic spherically expanding probability wave? You collimated it. Of course there will be spreading over large distances, but I absolutely do not see your point.

Anyway, I think you are muddying the waters. My point was simply: The mass of the emitter matters. Where is your problem with that statement?
 
  • #27
Cthugha said:
No, but I was arguing with it. That is a very central point you do not seem to grasp.
How so? I always acknowledged emission in general may be of random nature, and never claimed it was possible to accurately measure a given single-photon emission event.
I said that collective excitations are present. Nothing more and nothing less.
Actually imo your comment in #24 was saying quite a bit more:
Q-reeus: "Not sure it follows like so. There are a huge number of individual emission events going on in these entities. For any particular one such event, how would the mass of a star be involved, rather than just that of say two colliding ions?"

Hmm, are you familiar with very basic solid state physics? Because SSP is all about such stuff, especially photons. Whether you have collective excitations of a large system or a single ion that needs to take the whole recoil makes a huge difference. Of course stars are somewhat different to treat than solids, but some basic principles apply there, too.
And that followed on from #22 where you implied recoil from emission processes in a star was far smaller than for case of isolated ionic/atomic emission. At stellar surface where emissions capable of escaping to outside occur, gas densities are typically quite low. How on Earth can there be some effective collective phenomenon that dramatically reduces recoil relative to that for an isolated collision/emission process? But I will comment more on that later.
You need some collective excitation that can take up arbitrarily small amounts of momentum. In a solid these are typically phonons. The closest thing in stars is typically density waves.
What kind of density waves - which to be relevant must act to generally and dramatically suppress recoil of individual ionic/atomic emissions? And how do such remotely relate to solid-state phenomena where e.g. band structure actually exists, unlike in a star? Can you cite a reference - including specific parts of the same, that back this up?

Q-reeus: "Even in extreme case of a neutron star, it is generally believed superfluidity/superconductivity is restricted to interior region, and at surface where emissions to outside is possible, there is a solid crust that is not in superfluid/superconducting state. Could be wrong though."

What does that have to do with the topic at hand?
Well you vaguely linked stellar behaviour re recoil suppression with that in a vastly different solid-state physics setting. So I simply considered the one stellar example I could think of where a collective QM phenomenon not only exists but is dominant. Utterly different to a normal star of course - but just barely qualifying for that label 'star'.
Q-reeus: "Sure, eventually any individual recoil event will be thermally/gravitationally absorbed into the whole, but by that lengthy and ill-defined time, the culprit has surely long escaped with characteristics set by the very local collision/emission event. Is not thermal line-broadening in stellar spectra a well recognized phenomenon?"

What makes you think it is lengthy? This is of course a question of density, temperature and other things. And of course there is thermal line broadening. What is the point?
Being no expert on stellar atmospheres, I did a search and found this article
As I suspected, Doppler broadening owing to thermal motions of individual ions at the ~5772 K of the solar photosphere totally dominates, by a factor of ca 10^3, over other contributors, including uncertainty-principle (natural) broadening and collisional broadening. Thus stellar atmospheric ions/atoms and their emissions behave as I expected - individual entities subject to random thermal encounters in a relatively dilute environment of ~1.5×10^23 ions/m^3. That's the point - no indication whatsoever of any collective recoil-suppressing mechanism a la Mossbauer. And indeed how could there be anything remotely solid-state-like operative in a stellar atmosphere, where the notion of say band structure is absurd. Makes no sense to me.
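A minimal check of why a factor of order 10^3 is plausible, comparing the fractional thermal Doppler width to a natural width set by a ~10 ns radiative lifetime (the lifetime, the 500 nm line and the choice of hydrogen and iron are illustrative assumptions):

```python
# Doppler broadening vs natural broadening for an optical line at T ~ 5772 K
import math

k_B = 1.381e-23   # Boltzmann constant, J/K
u = 1.661e-27     # atomic mass unit, kg
c = 3.0e8         # m/s
T = 5772.0        # K, solar photosphere

nu = c / 500e-9                               # frequency of an assumed 500 nm line
tau = 10e-9                                   # s, typical allowed-transition lifetime (assumed)
natural_frac = 1 / (2 * math.pi * tau) / nu   # fractional natural linewidth

for name, A in [("H", 1.008), ("Fe", 55.85)]:
    v_th = math.sqrt(2 * k_B * T / (A * u))   # most probable thermal speed
    doppler_frac = v_th / c                   # fractional Doppler width
    print(f"{name}: Doppler/natural ~ {doppler_frac / natural_frac:.0f}")
```

This gives ratios from a few hundred up to around a thousand, in line with the dominance of Doppler broadening claimed above.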
Q-reeus: "The single serious argument I can take from all this has to do then with above matter of entanglement followed by decoherence. And that very likely random environmental disruption is at emitter end of things. From this it presumably follows that since photon emission has no meaning until decoherence, and since decoherence is random in nature, there is no way to argue for emission of point-like entity with a definite if unknown momentum. That about it?"

This has not much to do with what I have written.
Then the bit about entanglement, decoherence, and emitter vs photon was making what argument exactly?
But any approach using spatially well-defined realistic bullet-like photons has led to contradictions in two-photon interference experiments (unless you include Bohmian mechanics, where you split things into the photon and the guiding wave).
Quite aware 'bullet-like' won't work, but de Broglie-Bohm approach may indeed have something going for it. I'm somewhat agnostic.
One initial point. It may sound like nitpicking, but it is a basic issue of quantum optics. Attenuating a light source never gives you single photons. Having single photons means you get a Fock state with a photon number variance of zero. This is impossible using attenuation. You can get one photon on average, but that is a huge difference in experiments.
OK fair point. So I meant one photon on average.
The more important question is: is the emission isotropic initially? Then you can obviously not collimate it completely. The complete solid angle you can collect depends on the NA of your lens.
Of course, but one can get a quite narrow solid angle.
Why would anyone assume a collimated light beam to be associated with an isotropic spherically expanding probability wave? You collimated it. Of course there will be spreading over large distances, but I absolutely do not see your point.
Point is, if photon is a real point-like entity, collimation at source end and subsequent arrival with tight angular distribution at receptor end is a natural expectation. If on the other hand photon is spreading wave - either real E & B fields or just as probability wavefunction, why would it not spread, Huygens-like, as spherical wavefront, once past collimator? In which case at large distance from source a near isotropic angular detection probability should apply. But won't. I chose to avoid laser as example of highly directional beam for the reason that interference there can well agree with photon-as-spreading-wave model.
Anyway, I think you are muddying the waters. My point was simply: The mass of the emitter matters. Where is your problem with that statement?
Nothing, if it just relates to uncertainties in the emission process itself. I have no problem with the fact that e.g. atomic emissions are subject to the uncertainty relation. But that is not the issue. Anyway, just what is your idea of a 'photon', if anything more than 'shut up and calculate'?
 
  • #28
Q-reeus said:
How so? I always acknowledged emission in general may be of random nature, and never claimed it was possible to accurately measure a given single-photon emission event.

Yes, but that has nothing to do with my argument. Again, in a nutshell: how much the state of an emitter changes depends on its mass. The smaller the mass, the larger the change of momentum relative to the momentum uncertainty of the state the emitter was initially in.

Q-reeus said:
Actually imo your comment in #24 was saying quite a bit more:

You cite here the point where I ask whether you know about collective excitations. One never knows the background knowledge discussion partners on these forums have (and I still do not), so I asked about collective excitations and gave an example where they are dominant. The idea that I want to treat a star like a solid was yours.

Q-reeus said:
And that followed on from #22 where you implied recoil from emission processes in a star was far smaller than for case of isolated ionic/atomic emission. At stellar surface where emissions capable of escaping to outside occur, gas densities are typically quite low. How on Earth can there be some effective collective phenomenon that dramatically reduces recoil relative to that for an isolated collision/emission process? But I will comment more on that later.

I do not imply the recoil is smaller, but that the change in momentum relative to the emitter's momentum uncertainty is smaller. This is only a question of the mass, or of the number of particles taking part in the interactions. This number is necessarily larger even for a typical gas than for an isolated emitter.


Q-reeus said:
What kind of density waves - which to be relevant must act to generally and dramatically suppress recoil of individual ionic/atomic emissions? And how do such remotely relate to solid-state phenomena where e.g. band structure actually exists, unlike in a star? Can you cite a reference - including specific parts of the same, that back this up?

I said before the difference to a solid is that these waves do not have particle character. You need the lattice to get that. Otherwise the requirements are obviously similar: You need some kind of excitation that already exists at small k. What that means for a star is difficult to say as it depends on the star. You may cover the necessary physics in terms of disordered media (see e.g. "Phonon sidebands of ordered and disordered media: naphthalene crystals and molecularly doped polymers", J. Phys. Chem., 1989, 93 (5), pp 1677–1680) - and no, I do not claim these are stars - or in terms of Langmuir waves like in plasmas (e.g. Propagation of Langmuir waves in an inhomogeneous plasma, AIP Conference Proceedings, Volume 159, pp. 456-459 (1987).)

Q-reeus said:
Well you vaguely linked stellar behavour re recoil suppression with that in vastly different solid-state physics setting. So I simply considered the one stellar example I could think of where collective QM phenomenon not only exists but is dominant. Utterly different to a normal star of course - but just barely qualifying for that label 'star'.

I linked it to give a good example for the importance of collective behavior in emission processes. I did not call it a model system for a star.

Q-reeus said:
Being no expert on stellar atmospheres, I did a search and found this article
As I suspected, Doppler broadening owing to thermal motions of individual ions at the ~ 5772K for solar photosphere totally dominates by factor of ca 103 over other contributors, including uncertainty principle, natural broadening, collisional broadening. Thus stellar atmospheric ions/atoms and their emissions behave as I expected - individual entities subject to random thermal encounters in a relatively dilute environment of ~ 1.5*1023 ions/m3. That's the point - no indication whatsoever of any collective recoil suppressing mechanism a la Mossbauer. And indeed how could there be anything remotely solid-state like operative in a stellar atmosphere where the notion of say band-structure is absurd. Makes no sense to me.

Ok, but I never claimed that. By the way pointing to thermal broadening is a red herring. You would not see whether the recoil is taken by one emitter alone or by the collective in this manner in the broadening.
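For what it's worth, the rough size of that thermal (Doppler) broadening, and the ~10^3 ratio to the natural width quoted above, can be reproduced from textbook formulas. A sketch with illustrative values for H-alpha at the photospheric temperature (the Einstein A coefficient is an approximate literature value):

Code:
import math

# Rough comparison of Doppler (thermal) width vs. natural width for H-alpha
# at the solar photosphere temperature. Order-of-magnitude only.
c   = 2.998e8          # m/s
kB  = 1.381e-23        # J/K
m_H = 1.674e-27        # kg
T   = 5772.0           # K
lam = 656.3e-9         # m, H-alpha
A32 = 4.4e7            # 1/s, approximate Einstein A coefficient for H-alpha

nu0 = c / lam
dnu_doppler = nu0 * math.sqrt(8 * kB * T * math.log(2) / (m_H * c**2))  # Doppler FWHM
dnu_natural = A32 / (2 * math.pi)                                       # ~ natural FWHM

print(f"Doppler FWHM  ~ {dnu_doppler:.2e} Hz")
print(f"natural width ~ {dnu_natural:.2e} Hz")
print(f"ratio         ~ {dnu_doppler / dnu_natural:.0f}")

This gives a Doppler FWHM of a few times 10^10 Hz against a natural width of order 10^7 Hz, i.e. a ratio of a few thousand.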

Q-reeus said:
Then the bit about entanglement, decoherence, and emitter vs photon was making what argument exactly?

Now you are trying to kid me, no? If you can write me exactly what point of the argument was complicated to understand, I can of course rephrase, but at the moment I cannot see how to explain it better.

Q-reeus said:
Quite aware 'bullet-like' won't work, but de Broglie-Bohm approach may indeed have something going for it. I'm somewhat agnostic.

Maybe, maybe not.

Q-reeus said:
OK fair point. So I meant one photon on average.

Ok

Q-reeus said:
Of course, but one can get a quite narrow solid angle.

Yes.

Q-reeus said:
Point is, if photon is a real point-like entity, collimation at source end and subsequent arrival with tight angular distribution at receptor end is a natural expectation. If on the other hand photon is spreading wave - either real E & B fields or just as probability wavefunction, why would it not spread, Huygens-like, as spherical wavefront, once past collimator?

Now I am really puzzled. It will spread slightly as described by the Huygens-Fresnel propagator. Whether that really makes a difference depends on how far it travels.

Q-reeus said:
In which case at large distance from source a near isotropic angular detection probability should apply. But won't. I chose to avoid laser as example of highly directional beam for the reason that interference there can well agree with photon-as-spreading-wave model.

Well at REALLY large distances it will be somewhat isotropic again, but that is as mentioned above basic diffraction. What exactly is your point with this example? I feel like I am missing or misunderstanding something here.
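To attach numbers to "how far it travels": past an aperture of diameter D the angular spread of the mode is of order lambda/D, and the transverse extent then grows linearly with distance. A minimal sketch with made-up but representative values:

Code:
# Far-field diffraction spread of a collimated mode of wavelength lambda
# leaving an aperture of diameter D. Order-of-magnitude only.
lam = 500e-9            # m, optical wavelength
D   = 1e-3              # m, collimator aperture
theta = 1.22 * lam / D  # ~ half-angle of the central diffraction lobe (rad)

ly = 9.46e15            # metres per light year
for L_ly in (1.0, 100.0, 1e6):
    width = 2 * theta * L_ly * ly   # rough transverse width after distance L
    print(f"after {L_ly:>9.0f} ly: beam width ~ {width:.2e} m")

So after a single light year a 1 mm collimated optical mode is already ~10^13 m across; how "spread out" the photon looks really does depend on how far it travels.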

Q-reeus said:
Nothing if it just relates to uncertainties in emission process itself. I have had no problem with that e.g. atomic emissions are subject to uncertainty relation. But that is not the issue.

Well, the recoil is very intimately linked to momentum uncertainty. That is part of the issue I think.

Q-reeus said:
Anyway, just what is your idea of 'photon', if anything more than 'shut up and calculate'?

Basically it is SUAC. But I like Mermin's point of view where photon detection events (not photons, but at least experimentally accessible) are considered in terms of field correlations. It is expressed in a humorous manner in the following quote:

"My complete answer to the late 19th century question "what is electrodynamics trying to tell us?" would simply be this: Fields in empty space have physical reality; the medium that supports them does not.

Having thus removed the mystery from electrodynamics, let me immediately do the same for quantum mechanics: Correlations have physical reality; that which they correlate, does not."
(N. David Mermin, from "What is Quantum Mechanics Trying to Tell Us?")
 
  • #29
Cthugha said:
Yes, but that has nothing to do with my argument. Again in a nutshell: How much the state of an emitter changes depends on its mass. The smaller the mass, the larger the change of the momentum relative to the momentum uncertainty of the state the emitter was initially in.
Right, well, I'm missing it somehow, because all I got from that was that the emission frequency is somewhat reduced for a low-mass emitter.
Cthugha said:
I do not imply recoil is smaller, but that the change in momentum is smaller.
There is a distinction? I understood the terms as synonymous. Maybe you mean recoil velocity, which then makes more sense.
Cthugha said:
This is only a question of the mass or the number of particles taking part in interactions. This number is necessarily larger even for a typical gas than for an isolated emitter.
My recollection, somewhat hazy, is that gas particle collisions are rarely more than one-on-one. Expected since collision cross-sections are typically very small to start with.
Cthugha said:
Ok, but I never claimed that. By the way pointing to thermal broadening is a red herring. You would not see whether the recoil is taken by one emitter alone or by the collective in this manner in the broadening.
If taken by collective - at the instant of recoil, how could it not be, a la Mossbauer, that velocity of recoil is proportionately reduced - hence reduced line broadening?
Cthugha said:
Now you are trying to kid me, no? If you can write me exactly what point of the argument was complicated to understand, I can of course rephrase, but at the moment I cannot see how to explain it better.
Reading through again your #24, I now think you were saying conundrum of possibly billions of light-years retrocausality I raised is moot because in practice decoherence intervenes very early on, establishing definiteness to all participants in emission event?
Q-reeus: "Point is, if photon is a real point-like entity, collimation at source end and subsequent arrival with tight angular distribution at receptor end is a natural expectation. If on the other hand photon is spreading wave - either real E & B fields or just as probability wavefunction, why would it not spread, Huygens-like, as spherical wavefront, once past collimator?"

Cthugha said:
Now I am really puzzled. It will spread slightly as described by the Huygens-Fresnel propagator. Whether that really makes a difference depends on how far it travels.
But usual Huygens-Fresnel diffraction pattern assumes continuous wave-train. If single photon propagates as a spherical pulse, in my book one should have roughly spherical diffraction past point of collimator aperture, depending on exact spectrum of pulse. For me this is all nonsensical to start with since it implied that much of a photon was lost to collimation to begin with, a contradiction of the assumption of photon quantization.
Q-reeus: In which case at large distance from source a near isotropic angular detection probability should apply. But won't. I chose to avoid laser as example of highly directional beam for the reason that interference there can well agree with photon-as-spreading-wave model.

Cthugha said:
Well at REALLY large distances it will be somewhat isotropic again, but that is as mentioned above basic diffraction. What exactly is your point with this example? I feel like I am missing or misunderstanding something here.
See previous remarks.
Cthugha said:
Basically it is SUAC. But I like Mermin's point of view where photon detection events (not photons, but at least experimentally accessible) are considered in terms of field correlations. It is expressed in a humorous manner in the following quote:

"...Having thus removed the mystery from electrodynamics, let me immediately do the same for quantum mechanics: Correlations have physical reality; that which they correlate, does not." (N. David Mermin, from "What is Quantum Mechanics Trying to Tell Us?")
Ha ha. :biggrin: And thus is everything now clear. I suppose my main reason for opting for point-like photon gets back to an earlier thread where a respected participant pushed the photon as spreading EM wave model, and basically a modern version of Planck's loading theory was used to justify simultaneously photoelectric effect and double-slit interference pattern.
Got me thinking about limiting situation of extremely weak emission coupled with small and cryogenically chilled detection screen. That limit seemed to definitely rule out the notion that energy in screen lattice could gradually accumulate owing to successive partial energy dumps by individual quanta, as energy extraction rate from such dumps would greatly exceed any accumulation rate. Hence threshold energy, required in loading theory, never really arrives, or is at least fantastically unlikely. Photoemission would present no conceptual problem for point-like photon model as energy deposits in one quick go. So photoemission/detection rate should here be very model dependent imo. Not aware of any experiments testing that regime though.
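The standard numerical form of that objection to a loading picture is the classical "time lag" estimate: how long an atom-sized collector would need to soak up one work function's worth of energy from a very weak classical wave. A back-of-envelope sketch with illustrative numbers:

Code:
# Naive classical "loading time": time for an atom-sized collector to soak up
# one work function's worth of energy from a very weak light beam.
# Illustrative numbers only.
e = 1.602e-19           # J per eV

intensity = 1e-10       # W/m^2, a deliberately weak beam
W = 2.0 * e             # J, ~2 eV work function
r_atom = 1e-10          # m, atomic radius
area = 3.1416 * r_atom**2

t_load = W / (intensity * area)
print(f"naive loading time ~ {t_load:.1e} s (~{t_load/3.15e7:.1e} years)")

With these (deliberately weak) numbers the naive accumulation time is thousands of years, whereas photoelectrons are observed essentially without delay.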
 
  • #30
Q-reeus said:
There is a distinction? I understood the terms as synonymous. Maybe you mean recoil velocity, which then makes more sense.

Mea culpa. That was a typo. I wanted to express that either recoil momentum per particle is reduced (many particles) OR recoil velocity is reduced.

Q-reeus said:
My recollection, somewhat hazy, is that gas particle collisions are rarely more than one-on-one. Expected since collision cross-sections are typically very small to start with.

IIRC in stars you typically have ions present in the radiative zones which may interact in many different ways. Also most photons do not even escape the radiative zone, but are reabsorbed quite quickly opening up the possibility for collective radiative coupling. But they may indeed rarely really collide. That is true. Anyway, I was not stating that it is typical for all stars to see that. I was just pointing out that this is another possibility besides large mass to create small recoil velocity.

Q-reeus said:
If taken by collective - at the instant of recoil, how could it not be, a la Mossbauer, that velocity of recoil is proportionately reduced - hence reduced line broadening?

Either I completely misunderstand you or we are talking past each other. My usage of thermal broadening is that a line is broadened because the emission comes from many atoms/ions/emitters/whatever that have very different velocities. This spread in the velocities causes the line to broaden. The recoil does not alter this. The cool thing Mössbauer achieved was rather the possibility of resonance fluorescence - absorption and emission (if one can even call it that) of indistinguishable photons.
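For concreteness, the free-recoil versus recoil-free distinction in the Mössbauer case can be put in numbers (rounded textbook values for the 14.4 keV line of 57Fe):

Code:
# Recoil energy of a free 57Fe nucleus emitting the 14.4 keV Moessbauer line,
# compared with the natural linewidth of that transition. Rounded textbook values.
E_gamma = 14.4e3        # eV, gamma energy
M_c2    = 57 * 931.5e6  # eV, rest energy of the 57Fe nucleus
Gamma   = 4.7e-9        # eV, approximate natural linewidth of the 14.4 keV level

E_recoil_free = E_gamma**2 / (2 * M_c2)   # recoil shift if a single free nucleus takes the recoil
print(f"free-nucleus recoil shift ~ {E_recoil_free*1e3:.2f} meV")
print(f"natural linewidth         ~ {Gamma*1e9:.1f} neV")
print(f"ratio                     ~ {E_recoil_free/Gamma:.1e}")

A free nucleus would shift the line by ~2 meV, some 10^5 to 10^6 natural linewidths, which is why only the recoil-free (whole-lattice) events allow resonance fluorescence.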

Q-reeus said:
Reading through again your #24, I now think you were saying conundrum of possibly billions of light-years retrocausality I raised is moot because in practice decoherence intervenes very early on, establishing definiteness to all participants in emission event?

Well, for light emitters I would suppose this is the most likely scenario.

Q-reeus said:
But usual Huygens-Fresnel diffraction pattern assumes continuous wave-train. If single photon propagates as a spherical pulse, in my book one should have roughly spherical diffraction past point of collimator aperture, depending on exact spectrum of pulse. For me this is all nonsensical to start with since it implied that much of a photon was lost to collimation to begin with, a contradiction of the assumption of photon quantization.

Well, not necessarily continuous, but you need to know the time dependence of the phase of the light field at the area of interest in space. I do not see how you can lose much of a photon. You can lose the photon in x of 100 repetitions of an emission event, but it indeed never gets partially lost.

Q-reeus said:
Got me thinking about limiting situation of extremely weak emission coupled with small and cryogenically chilled detection screen. That limit seemed to definitely rule out the notion that energy in screen lattice could gradually accumulate owing to successive partial energy dumps by individual quanta, as energy extraction rate from such dumps would greatly exceed any accumulation rate. Hence threshold energy, required in loading theory, never really arrives, or is at least fantastically unlikely.

Hmm, you can shoot weak emission or even single photons at whatever you like. You may do experiments like antibunching (two detectors never fire simultaneously when you fire a single photon at them) to test loading theory, I think. The joint detection rate should have some dependence on the threshold energy. Anyway, I thought loading theory is dead anyway?
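As a concrete version of the antibunching test mentioned above: one estimates the normalized zero-delay coincidence rate g2(0) between two detectors behind a beam splitter; a true single-photon source gives g2(0) close to zero. A toy estimator in Python, with made-up click counts (the function and numbers are illustrative, not real data):

Code:
# Toy estimate of g2(0) from simultaneous-click statistics at two detectors
# behind a beam splitter. Click records are invented for illustration.
def g2_zero(clicks_a, clicks_b, coincidences, n_trials):
    # g2(0) ~ P(A and B) / (P(A) * P(B)), estimated from counts per trial
    pa = clicks_a / n_trials
    pb = clicks_b / n_trials
    pab = coincidences / n_trials
    return pab / (pa * pb)

# Hypothetical numbers: 10^6 heralded single-photon trials, essentially no
# simultaneous clicks -> g2(0) << 1 (antibunching).
print(g2_zero(clicks_a=4.8e5, clicks_b=4.9e5, coincidences=30, n_trials=1e6))

Any classical intensity distribution gives g2(0) >= 1, so values well below 1 are the signature of single photons.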

Q-reeus said:
Photoemission would present no conceptual problem for point-like photon model as energy deposits in one quick go. So photoemission/detection rate should here be very model dependent imo. Not aware of any experiments testing that regime though.

It is not too clear to me, why the detection rate should depend on the photon model. In the statistical ensemble, the results will be the same. A purely point-like photon will, however, always create problems when you try to explain interference experiments. Especially two-photon interference gets non-intuitive using a bullet-photon model.
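For the two-photon interference remark, the Hong-Ou-Mandel effect is the standard example: with one photon in each input of a lossless 50:50 beam splitter, the two paths that would give a coincidence (both reflected, both transmitted) have amplitudes that cancel. A minimal amplitude-bookkeeping sketch (one common phase convention assumed):

Code:
import cmath

# Hong-Ou-Mandel coincidence amplitude at a lossless 50:50 beam splitter.
# Convention: r = i/sqrt(2), t = 1/sqrt(2) (one common choice).
r = 1j / cmath.sqrt(2)
t = 1  / cmath.sqrt(2)

# Two indistinguishable ways to get one photon in each output port:
# both photons reflected (r*r) or both transmitted (t*t).
amp_coincidence = r * r + t * t
print(abs(amp_coincidence)**2)   # -> 0: no coincidences for indistinguishable photons

A "bullet" picture has no natural way to produce this cancellation, which is the non-intuitive part.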
 
  • #31
Cthugha said:
Either I completely misunderstand you or we are talking past each other. My usage of thermal broadening is that a line is broadened because the emission comes from many atoms/ions/emitters/whatever that have very different velocities. This spread in the velocities causes the line to broaden. The recoil does not alter this. The cool thing Mössbauer achieved was rather the possibility of resonance fluorescence - absorption and emission (if one can even call it that way) of indistinguishable photons.
Err, yes - mea culpa there. I meant that the frequency down-shift owing to single- or two-emitter recoil would be much greater than if a collective, many-particle sharing of the recoil were in effect. I was mixing that up in my head with the point that the observed thermal line broadening is consistent with more or less free single particles, which in a way comes back to the nature of the recoil processes going on. Never mind.
Cthugha said:
Hmm, you can shoot weak emission or even single photons at whatever you like. You may do experiments like antibunching (two detectors never fire simultaneously when you fire a single photon at them) to test loading theory, I think. The joint detection rate should have some dependence on the threshold energy.
An area I know next to nothing about, but may try and chase up. There is one person who has apparently solid evidence for 'funny business' in this matter involving very high energy EM radiation - gamma rays in fact. But I say no more.
Cthugha said:
Anyway, I thought loading theory is dead anyway?
Pretty sure there are at least one or two proponents lurking here. Again I say no more.
Cthugha said:
It is not too clear to me, why the detection rate should depend on the photon model. In the statistical ensemble, the results will be the same. A purely point-like photon will, however, always create problems when you try to explain interference experiments. Especially two-photon interference gets non-intuitive using a bullet-photon model.
Yes, I understand the appeal of the wave model and would otherwise much prefer it. And as stated earlier, I certainly don't subscribe to a bullet-photon concept. Have no idea if D-B theory or something else holds all the answers, interpretation-wise. Anyway, I'm about done on this, but thanks for some stimulating feedback. I had not considered the aspect of environmental decoherence before. Cheers. :zzz:
 

1. What is a photon?

A photon is a fundamental particle of light that carries energy and has zero rest mass. It is the quantum of the electromagnetic field, and all electromagnetic radiation, including visible light, is made up of photons.

2. How is a photon described qualitatively?

A photon can be described qualitatively by its properties such as energy, wavelength, frequency, and polarization. It can also be described by its behavior, such as its ability to travel at the speed of light and its interaction with matter.

3. How can we observe a photon from a faraway star?

We can observe a photon from a faraway star through telescopes and other instruments that are designed to detect and measure electromagnetic radiation. These instruments can capture and amplify the faint signals of photons coming from distant stars.

4. Why is it important to study the qualitative description of photons from faraway stars?

Studying the qualitative description of photons from faraway stars can provide valuable insights into the properties and behavior of light, which is essential for understanding the universe and developing new technologies. It can also help us learn more about the distant stars and galaxies that emit these photons.

5. How does the qualitative description of photons from faraway stars differ from those from nearby stars?

The qualitative description of photons from faraway stars may differ from that of photons from nearby stars due to the effects of distance and the medium through which the photons travel. These factors can cause the photons to undergo changes in energy, wavelength, and polarization, which can affect their qualitative description.
