Undergrad Valid local explanation of Bell violations? (Pegg et al., 1999; 2008)

  • Thread starter: iste
TL;DR
Question about the plausibility of a local explanation of Bell-violating entanglement, specifically in the Kocher-Commins experiment, using quantum retrodiction.
These papers by Pegg et al. (doi: 10.1016/j.shpsb.2008.02.003 [section 4]; https://www.researchgate.net/publication/230928426_Retrodiction_in_quantum_optics [section 3.2]) seem to show that photon Bell correlations can be inferred using quantum theory in a manner compatible with locality by performing quantum retrodiction (i.e. inferring information about the past: e.g. https://doi.org/10.3390/sym13040586; more papers at the end), where they evolve backward from Alice's measured outcome and then forward again to Bob's. They only model the Kocher-Commins experiment specifically, where retrodicting an atom-field state in the past tells you the state in which Bob's photon was emitted, and they do not seem to have generalized beyond this example. Nonetheless, I think the argument has a general force: if quantum theory itself actually allows you to retrodict a definite state at the time of a locally mediated correlating interaction, then non-local influences arguably seem redundant for producing the correct probabilities in entanglement scenarios similar to the one in Pegg et al.
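To make the "backward then forward" bookkeeping concrete, here is a minimal numerical sketch I put together; it is my own toy check, not the formalism from the papers, and it assumes the two photons leave the source with the same linear polarization (a state proportional to |HH> + |VV>; the orientation convention will differ between cascades):

Code:
# Toy check that "retrodict Alice's photon, then predict Bob's" reproduces the
# usual forward-in-time conditional probability for a Kocher-Commins-type
# polarization state. Assumes |psi> ~ (|HH> + |VV>)/sqrt(2).
import numpy as np

def pol(theta):
    # single-photon linear polarization state at angle theta (H = 0, V = pi/2)
    return np.array([np.cos(theta), np.sin(theta)])

def forward_conditional(a, b):
    # standard forward calculation: P(Bob passes at b | Alice passed at a)
    psi = (np.kron(pol(0.0), pol(0.0)) + np.kron(pol(np.pi / 2), pol(np.pi / 2))) / np.sqrt(2)
    p_both = abs(np.kron(pol(a), pol(b)) @ psi) ** 2   # both analyzers pass
    return p_both / 0.5                                # Alice passes with probability 1/2

def retrodict_then_predict(a, b):
    # zig-zag bookkeeping: Alice's pass outcome retrodicts her photon as polarized
    # along a; the source then fixes the partner at the same angle, which propagates
    # unchanged to Bob, who passes it with probability cos^2(b - a)
    return abs(pol(b) @ pol(a)) ** 2

for a, b in [(0.0, 0.3), (0.2, 1.1), (0.7, 0.7)]:
    assert np.isclose(forward_conditional(a, b), retrodict_then_predict(a, b))
print("retrodict-then-predict matches the usual conditional probability")

The point of the check is only that the two chains of reasoning give the same conditional probabilities; it says nothing by itself about whether the backward leg should be read causally.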

If valid, then clearly the retrodiction from a measurement outcome assigns different states in the backward-in-time direction compared to what the corresponding regular forward-in-time description would start out with, and this looks retrocausal. But at the same time, the quantum retrodiction as described above is just another equivalent way of formulating the same empirical content of quantum theory, and related to regular quantum prediction by Bayes' theorem such that it would always be possible to derive or express backward-in-time probabilities entirely in terms of regular forward-in-time ones. One might then ask whether interpretations in which the wavefunction isn't real avoid any purported need for retrocausation (for instance, in Barandes' indivisible approach, the wavefunction is more or less reducible to a stochastic process description which then doesn't really leave any explicit indicators of retrocausality without importing additional ontological interpretation into the description). It might be worth noting that though the 2008 Pegg et al. paper uses explicit retrocausal language, the later review https://doi.org/10.3390/sym13040586 seems to be much more open to the non-reality of the wavefunction, and at the very least explicitly suggests that collapse should not be seen as a physical process.
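Concretely, the relation I have in mind is just Bayes' theorem applied to preparations and measurement outcomes, which (as I read the retrodictive formalism) takes the form

$$P(\text{prep } j \mid \text{outcome } i) \;=\; \frac{P(i \mid j)\,P(j)}{\sum_k P(i \mid k)\,P(k)} \;=\; \frac{\mathrm{Tr}(\hat{\Pi}_i \hat{\rho}_j)\,P(j)}{\sum_k \mathrm{Tr}(\hat{\Pi}_i \hat{\rho}_k)\,P(k)},$$

where the ##\hat{\rho}_j## are the candidate prepared states, the ##\hat{\Pi}_i## are the measurement (POVM) elements, and ##P(j)## is the prior over preparations; every quantity on the right-hand side is an ordinary forward-in-time one.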

I'm aware that this "Parisian zig-zag" explanation has been described by various others; but from what I have skimmed, others don't seem to be explicitly suggesting that it is actually inherent in quantum theory in the way these papers do. I guess the most important criticisms would be whether their description is truly a valid way of using quantum theory to explain those correlations, and whether their type of description can, broadly speaking, work for any entanglement experiment. Again, I'm inclined to think that if it is valid, it may not be strictly necessary that the explanation is retrocausal, at least with regard to the probabilities. But obviously there may be other arguments for retrocausality here that I'm not thinking of.

More quantum retrodiction papers:

https://eprints.gla.ac.uk/334605/;
arXiv:1107.5849v4 [e.g. section IVC];
arXiv:2010.05734v2 [e.g. section 2];
https://doi.org/10.1098/rsta.2023.0338; https://strathprints.strath.ac.uk/5854/;
arXiv:quant-ph/0106139;
arXiv:quant-ph/0207086v1
 
When I bring up that "researchgate" link, I get the abstract and the references - but not the full article.

From what I gather, in that paper they are saying that a retrodiction can be made based on one measurement that infers conditions that will affect the other measurement. Fine enough - but that retrodiction cannot be made until that first measurement has been made - and the results of that first measurement (and thus the retrodiction) can be based on events not local to the second measurement.

In other words, the retrodiction itself is non-local to the second measurement. Although the second measurement may be within the light cone of conditions described by the retrodiction, it is not within the light cone of the retrodiction itself.
 
.Scott said:
When I bring up that "researchgate" link, I get the abstract and the references - but not the full article.
That's strange; as of now it gives me both the full article on the page and a PDF link. I can only offer a different link:

https://scholar.google.co.uk/scholar?cluster=11813849258704424520&hl=en&as_sdt=0,5&as_vis=1

.Scott said:
retrodiction cannot be made until that first measurement has been made - and the results of that first measurement (and thus the retrodiction) can be based on events not local to the second measurement.
This is fair, though my counter would be that:

1) For all intents and purposes, that first measurement (e.g. Alice's) is going to have a random outcome which wouldn't change whether it is performed simultaneously alongside Bob's, or Bob's is never performed at all, or anything else.

2) It seems to me that their retrodiction from that measurement outcome would be the same whether the measurement is on an entangled state or a single-particle state on its own.

It's then not obvious that the explanation depends on anything else going on non-locally: just the first measurement outcome and the atom-field state it retrodicts back to in the past, which locally produced the two photon states in a correlated manner, the second of which goes off to Bob.
 
iste said:
their retrodiction from that measurement outcome would be the same whether the measurement is on an entangled state or a single-particle state on its own.
If that's true, then their retrodiction can't make correct predictions, because it can't account for the presence of the Bell inequality-violating correlations in the first case, and their absence in the second.
 
PeterDonis said:
If that's true, then their retrodiction can't make correct predictions, because it can't account for the presence of the Bell inequality-violating correlations in the first case, and their absence in the second.
What I mean is that retrodicting back-in-time from the polarization measurement outcome always retrodicts the polarization state associated with that outcome. If you measure H, it retrodicts H back-in-time if the scenario was a measurement on a single-particle state. In the entanglement case, they are using this to retrodict about an atom-field state in the past which emitted another photon correlated to the first, and will produce the correct Bell violating conditional probabilities when you measure that second single particle state that was emitted.
 
iste said:
retrodicting back-in-time from the polarization measurement outcome always retrodicts the polarization state associated with that outcome.
Which is not an entangled state, so, as I said, this can't possibly make correct predictions about Bell inequality violations, which arise from entangled states.

iste said:
If you measure H, it retrodicts H back-in-time if the scenario was a measurement on a single-particle state.
This contradicts what you said before; before you said "always", now you're saying "only if the scenario was a measurement on a single-particle state". But knowing whether or not the measurement was on a single-particle state requires information not local to the measurement, as @.Scott has already pointed out.

iste said:
In the entanglement case, they are using this to retrodict about an atom-field state in the past which emitted another photon correlated to the first, and will produce the correct Bell violating conditional probabilities when you measure that second single particle state that was emitted.
In other words, they can't predict Bell inequality violating correlations at all--they have to have them measured first, and then claim to "retrodict" them. Sounds pointless to me.
 
iste said:
1) For all intents and purposes, that first measurement (e.g. Alice's) is going to have a random outcome which wouldn't change whether it is performed simultaneously alongside Bob's, or Bob's is never performed at all, or anything else.
Clearly, if you start with that presumption, everything is local.
But that presumption is wrong.
If it was right, there would be nothing to talk about - because that is the way we all "naturally" expect things to work.

The key to understanding the Bell inequality is to understand how one can demonstrate that that "common sense" presumption is wrong.

And the way that it is done is this:
1) Flip Bob's detector upside-down so that it is rotated 180 degrees compared to the measurements made by Alice. This makes the rest of the experiment easy to explain.
2) Make 10,000 measurements and compare the Alice/Bob results. They will all match. Basically, each entangled particle is the upside-down version of its partner.
3) Now rotate Bob's device another 5 degrees and make another 10,000 measurements. The typical result will show about 38 mismatches - less than half a percent.
4) Now rotate Bob's device another 5 degrees (a total of 10 degrees) and make another 10,000 measurements. The typical result will show about 152 mismatches.

And that's where the problem comes in. If you start with 0 mismatches and rotating by 5 degrees gives about 38, then rotating another 5 degrees should give you at most another 38, for a total of 76. Somehow the measurement results are directly affected by the angular difference between the measurement settings selected by Alice and Bob. And that difference is not available to either Alice or Bob when the measurements are being made.
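To put rough numbers on that scaling (a quick sketch; the exact counts depend on whether you use the photon-polarization mismatch rate sin^2(d) or the spin-1/2 rate sin^2(d/2), but the ratio is what matters):

Code:
# Quantum mismatch rates grow roughly quadratically in the analyzer offset at
# small angles, so doubling the offset roughly quadruples the mismatches, while
# any local hidden-variable account is capped at double:
#   mismatch(10 deg) <= mismatch(5 deg) + mismatch(5 deg).
import numpy as np

d = np.radians(5.0)
for label, mismatch in [("photon sin^2(d)", lambda x: np.sin(x) ** 2),
                        ("spin-1/2 sin^2(d/2)", lambda x: np.sin(x / 2) ** 2)]:
    ratio = mismatch(2 * d) / mismatch(d)
    print(f"{label}: mismatch(10 deg) / mismatch(5 deg) = {ratio:.2f}  (local bound: 2)")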
 
PeterDonis said:
now you're saying "only if the scenario was a measurement on a single-particle state".
I meant it does the same thing in both cases once you have measured the outcome and evolve back from it.
PeterDonis said:
Which is not an entangled state, so, as I said, this can't possibly make correct predictions about Bell inequality violations, which arise from entangled states.
The gist of the mechanism is you measure H, retrodict H back at the source. Retrodicting that an H photon was emitted implies another V photon was also emitted, due to the atom description at the source. Propagating the V photon forward-in-time, staying the same, and then measuring it will give you the cos²(θ_B − θ_A) conditional probability, because Bob has measured the V photon with a cos² probability of Bob's analyzer orientation minus the V photon's orientation. From the description above, the V photon's orientation is conditioned on the other H photon's orientation, which is the same orientation as Alice's analyzer, which means Bob's cos² measurement probability is a function of Bob's analyzer orientation minus Alice's analyzer orientation. If you were to somehow modify the correlation at the source by 90° and/or −θ, you would get the other three Bell-state correlations using this same description.
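Written as a probability chain (my paraphrase, with ##\phi_0## standing for whatever fixed offset the cascade sets between the two photons' polarizations at the source, e.g. 0° or 90°):

$$P(\text{Bob passes at } \theta_B \mid \text{Alice passes at } \theta_A) \;=\; \cos^2\!\big(\theta_B - (\theta_A + \phi_0)\big),$$

which depends only on the analyzer difference ##\theta_B - \theta_A## once ##\phi_0## is fixed at the source.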


PeterDonis said:
In other words, they can't predict Bell inequality violating correlations at all--they have to have them measured first, and then claim to "retrodict" them. Sounds pointless to me.
Well, to me, unless the retrodiction is just wrong, it makes nonlocal influence look redundant.
 
iste said:
I meant it does the same thing in both cases once you have measured the outcome and evolve back from it.
I'm not sure what you mean by "the same".

iste said:
The gist of the mechanism is you measure H, retrodict H back at the source.
You're contradicting yourself again. If this is done regardless of whether the system was originally prepared as an entangled state or not, you obviously can't get things right in both cases. But if it's only done if the system was not originally prepared in an entangled state, then what you're saying here makes no sense.

iste said:
Retrodicting that an H photon was emitted implies another V photon was also emitted, due to the atom description at the source.
Only if the system was originally prepared in an entangled state. If it wasn't, retrodicting H for one photon tells you nothing at all about the other.

iste said:
Propagating the V photon forward-in-time, staying the same, and then measuring it will give you the cos²(θ_B − θ_A) conditional probability, because Bob has measured the V photon with a cos² probability of Bob's analyzer orientation minus the V photon's orientation.
What do you mean by "the V photon orientation"?

iste said:
From the description above, the V photon's orientation is conditioned on the other H photon's orientation
This makes no sense to me, unless it's just another way of saying that the two photons are prepared in a particular entangled state.
 
  • #10
.Scott said:
Clearly, if you start with that presumption, everything is local.
But that presumption is wrong.
If it was right, there would be nothing to talk about - because that is the way we all "naturally" expect things to work.

The key to understanding the Bell inequality is to understand how one can demonstrate that that "common sense" presumption is wrong.

And the way that it is done is this:
1) Flip Bob's detector upside-down so that it is rotated 180 degrees compared to the measurements made by Alice. This makes the rest of the experiment easy to explain.
2) Make 10,000 measurements and compare the Alice/Bob results. They will all match. Basically, each entangled particle is the upside-down version of its partner.
3) Now rotate Bob's device another 5 degrees and make another 10,000 measurements. The typical result will show about 38 mismatches - less than half a percent.
4) Now rotate Bob's device another 5 degrees (a total of 10 degrees) and make another 10,000 measurements. The typical result will show about 152 mismatches.

And that's where the problem comes in. If you start with 0 mismatches and rotating by 5 degrees gives about 38, then rotating another 5 degrees should give you at most another 38, for a total of 76. Somehow the measurement results are directly affected by the angular difference between the measurement settings selected by Alice and Bob. And that difference is not available to either Alice or Bob when the measurements are being made.
The papers infer the correct Bell correlation in a manner that looks local, or at least doesn't require distant communication in the inference steps, using only quantum theory. How it does this is that the retrodiction appears to violate statistical independence, because as far as I can tell, evolving back from a measured polarization outcome results in that same measured state in general when we have no information about the past.
 
  • #11
iste said:
a manner that looks local
I'm not sure how it "looks local" given that, as @.Scott has already pointed out, the information used does not all lie in a single past light cone.
 
  • #12
iste said:
it makes nonlocal influence look redundant.
Again, I'm not sure why. Consider an obvious "hidden variable" interpretation of the retrodiction: measuring H for the first photon sends the H information back in time along the first photon's worldline to the event where the two photons are prepared, and then forward in time along the second photon's worldline to the measurement of the second photon. (This is somewhat similar to John Cramer's transactional interpretation.) Is this "local"? I don't see how. "Local" doesn't allow backwards in time propagation. And any other way of "explaining" how the retrodiction works (assuming it does work) will have the same problem.
 
  • #14
PeterDonis said:
I'm not sure how it "looks local" given that, as @.Scott has already pointed out, the information used does not all lie in a single past light cone.
I think it does though because the explanation only relies on a random measurement outcome and what that retrodicts backward-in-time in a manner which only really depends on that outcome.

PeterDonis said:
Again, I'm not sure why. Consider an obvious "hidden variable" interpretation of the retrodiction: measuring H for the first photon sends the H information back in time along the first photon's worldline to the event where the two photons are prepared, and then forward in time along the second photon's worldline to the measurement of the second photon. (This is somewhat similar to John Cramer's transactional interpretation.) Is this "local"? I don't see how. "Local" doesn't allow backwards in time propagation. And any other way of "explaining" how the retrodiction works (assuming it does work) will have the same problem.

Well, if Alice's photon is not telling Bob's photon what to do across space instantaneously, but instead they are both being told what to do at the source, and the correlation is preserved because the states don't change during time evolution - that seems local to me. The question for this purported explanation, I guess, is whether backward propagation really is something like "time-travel".
 
  • #15
iste said:
Well, if Alice's photon is not telling Bob's photon what to do across space instantaneously, but instead they are both being told what to do at the source, and the correlation is preserved because the states don't change during time evolution - that seems local to me. The question for this purported explanation, I guess, is whether backward propagation really is something like "time-travel".
It is, it is retrocausality. You might say it is "local" if you wish but you are still breaking causality which seems as or more problematic than "nonlocality" (whatever that means).
 
  • #16
iste said:
I think it does though because the explanation only relies on a random measurement outcome and what that retrodicts backward-in-time in a manner which only really depends on that outcome.
The Bell experiments demonstrate that the measurement results are the result of both measurement choices. The only emitter contribution that is demonstrated by the experiments is that the particles were created entangled.

iste said:
Well, if Alice's photon is not telling Bob's photon what to do across space instantaneously, but instead they are both being told what to do at the source, and the correlation is preserved because the states don't change during time evolution - that seems local to me.
The "normal" (i.e., naive) expectation would be that some kind of hidden value (like a polarization angle) is created at the emitter and carried by each photon - then, when the measuring device is encountered, the result is some function of that hidden value and the measurement orientation. That is what you are describing, and that would be local.

But the actual results from the Bell experiments demonstrate that isn't what's happening. Bell-type experiments prove that two 15-degree measurement rotations are not the same as one 30-degree measurement rotation. So the result must also be a function of the angular difference between the measuring devices.

Each photon in the pair does not discover the orientation of its own measuring device until it reaches that device. So, not only does the emitter not have enough local information when it emits the photons, the combination of both photons does not have enough information until both of them reach their respective measuring devices. And just to further confound this situation, because of Special Relativity, there is no "preferred" sequence as to which detector (Alice's or Bob's) makes the measurement first. Reference frames are available that would show Alice before Bob, Bob before Alice, or both at once. So, it is completely fair to say that Alice's measurement result is affected by Bob's measurement orientation, even when (from your reference frame) Bob's measurement orientation has not even been chosen yet.

iste said:
The question for this purported explanation, I guess, is whether backward propagation really is something like "time-travel".
Backward propagation is non-local. But more importantly, once you have demonstrated that the system operates non-locally, you should be aware that terms like "propagation" suggest a very local cause-effect sequence. If interpreting this system as having "backward propagation" helps you in working out what is going on, then go with it. But what is basic are the QM numerical predictions and the experimental results. Everything beyond that is just the sweet syrup needed to swallow the pill.

With regard to time travel: At its essence, time travel would involve sending information back in time - or, at least, outside your light cone. These non-local effects "compromise" light cones - but with no chance of ever allowing information to be sent.
 
  • #17
pines-demon said:
It is, it is retrocausality. You might say it is "local" if you wish but you are still breaking causality which seems as or more problematic than "nonlocality" (whatever that means).
Maybe, but I think that if this retrodiction is problematic, you have to ask why it seems to exist as an option in quantum theory in the first place, one that can be used in various applications as described in the papers in the original post. If both of these already seem to exist in the theory to some extent, then it might be more parsimonious to ditch "nonlocality" if one can give an explanation of it in terms of retrodiction.
 
  • #18
pines-demon said:
It is, it is retrocausality. You might say it is "local" if you wish but you are still breaking causality which seems as or more problematic than "nonlocality" (whatever that means).
I'm having trouble finding John Bell's original 1964 paper. But it discussed "Local Reality Theorem" at length, and as I recall, it included the term "non-local".

I did find a source that quotes a section of Bell's original paper. Here is an excerpt that uses the term "non-local" in context:

The paradox of Einstein, Podolsky and Rosen was advanced as an argument that quantum mechanics could not be a complete theory but should be supplemented by additional variables. These additional variables were to restore to the theory causality and locality. In this note that idea will be formulated mathematically and shown to be incompatible with the statistical predictions of quantum mechanics. It is the requirement of locality, or more precisely that the result of a measurement on one system be unaffected by operations on a distant system with which it has interacted in the past, that creates the essential difficulty. There have been attempts [by von Neumann] to show that even without such a separability or locality requirement no ‘hidden variable’ interpretation of quantum mechanics is possible. These attempts have been examined [by Bell] elsewhere and found wanting. Moreover, a hidden variable interpretation of elementary quantum theory has been explicitly constructed [by Bohm]. That particular interpretation has indeed a gross non-local structure. This is characteristic, according to the result to be proved here, of any such theory which reproduces exactly the quantum mechanical predictions
 
  • #19
.Scott said:
The Bell experiments demonstrate that the measurement results are the result of both measurement choices. The only emitter contribution that is demonstrated by the experiments is that the particles were created entangled.


The "normal" (ie, naive) expectation would be that some kind of hidden value (like a polarization angle) is created at the emitter and carried by each photon - then, when the measuring device is encountered, the result is some function of that hidden value and the measurement orientation. That is what you are describing and that would be local.

But the actual results from the Bell experiments demonstrate that isn't what's happening. Bell-type experiments prove that two 15-degree measurement rotations are not the same as one 30-degree measurement rotation. So the result must also be a function of the angular difference between the measuring devices.

Each photon in the pair does not discover the orientation of its own measuring device until it reaches that device. So, not only does the emitter not have enough local information when it emits the photons, the combination of both photons does not have enough information until both of them reach their respective measuring devices. And just to further confound this situation, because of Special Relativity, there is no "preferred" sequence as to which detector (Alice's or Bob's) makes the measurement first. Reference frames are available that would show Alice before Bob, Bob before Alice, or both at once. So, it is completely fair to say that Alice's measurement result is affected by Bob's measurement orientation, even when (from your reference frame) Bob's measurement orientation has not even been chosen yet.


Backward propagation is non-local. But more importantly, once you have demonstrated that the system operates non-locally, you should be aware that terms like "propagation" suggest a very local cause-effect sequence. If interpreting this system as having "backward propagation" helps you in working out what is going on, then go with it. But what is basic are the QM numerical predictions and the experimental results. Everything beyond that is just the sweet syrup needed to swallow the pill.

With regard to time travel: At its essence, time travel would involve sending information back in time - or, at least, outside your light cone. These non-local effects "compromise" light cones - but with no chance of ever allowing information to be sent.
The description used in the papers is done exclusively with quantum theory, so I think what it comes down to is whether backward propagation is non-local. I think at least from the perspective of relativity, it is local. Whether it is local in some deeper sense depends on whether this backward propagation is actual retrocausation or merely retrodiction. At the level of probabilities this doesn't seem retrocausal to me, because the forward and backward descriptions are related by Bayes' theorem; and because the conventional forward-in-time entanglement description only really gives you the measurement results, I don't see any explicit contradictions in terms of probabilities. But this is just at the level of probabilities, whereas I think the obviousness of retrocausation would come from the overt changes to the quantum state when retrodicting.
 
  • #20
iste said:
the explanation only relies on a random measurement outcome
No, it relies on two "random" measurement outcomes which are spacelike separated, but which are nevertheless correlated in a way that violates the Bell inequalities. As I have already commented, the "explanation" implicitly relies on some kind of information traveling backwards in time from one measurement to the common preparation event, then forward in time to the other measurement. Calling that "local" is IMO a gross abuse of language.

iste said:
if Alice's photon is not telling Bob's photon what to do across space instantaneously, but instead they are both being told what to do at the source
No, neither of these is what the "explanation" is saying. The "explanation" is saying that Alice's photon, when measured, tells the source what its result was (backwards in time), and then the source tells Bob's photon (forwards in time). Just invoking the source alone telling Alice's and Bob's photons what to do is not enough, because the source, before Alice's measurement is made, doesn't know what Alice's result is, and knowledge of Alice's result is crucial in the "explanation" that's being provided.
 
  • #21
iste said:
I think what it comes down to is whether backward propagation is non-local. I think at least from the perspective of relativity, it is local.
I think you need to justify this statement with some kind of reference to the literature. AFAIK nobody has ever tried to defend this point of view.
 
  • #22
iste said:
Maybe, but I think that if this retrodiction is problematic, you have to ask why it seems to exist as an option in quantum theory in the first place, one that can be used in various applications as described in the papers in the original post. If both of these already seem to exist in the theory to some extent, then it might be more parsimonious to ditch "nonlocality" if one can give an explanation of it in terms of retrodiction.
I think I know what you are getting at.
It is common to see the term "quantum indeterminacy" melded with terms like "random" and "non-deterministic" - and perhaps I will start a thread on that topic sometime.

"Retrodiction", and specifically the way in which you are using the term, presumes a deterministic universe. So, by that logic, not only were the results of the experiment inherent during the Big Bang, all of mankind's development and the fact that such experiments would be conducted has also been inherent all along. This line of thought falls under the heading of "Superdeterminism". So, in a very real sense, everything may already be "local". That wiki article addresses this.

But expanding the term "local" to include the effects of superdeterminism is not a path I would recommend. Superdeterminism involves machinery that current Physics has not resolved - it is pretty much Terra Incognita. More importantly, it involves machinery that would be limited by Physics in its ability to reproduce actual Bell experiments (or any other real-life activity) because it would require that this huge amount of universal "local" information be copied into the machine - which would likely amount to a copyright non-cloning violation.

And there is another problem with retrodiction. What you are hoping to do is to find a retrodictive boundary to your experiment. But there is no guarantee that any such boundary (short of "everything") exists. Presuming that both measurements have sufficient information to deduce the other's measurement orientation, how far back in history did the retrodiction have to go to collect that information? About 9 years ago, physicists at MIT used 600-year-old starlight to make the Alice/Bob measurement decisions. So any retrodiction would have to go back well before that to explain how the needed information could be available locally.

It's probably fair to say that given the current lack of depth in our understanding of both "non-locality" and "superdeterminism" and with the common reasons they have for being invoked, they should be considered two different interpretations of what we see in Bell experiments.
 
  • #23
iste said:
I think that if this retrodiction is problematic, you have to ask why it seems to exist as an option in quantum theory in the first place
It doesn't, in any sense that matters.

What you are calling "retrodiction" is not actually "retrodicting" anything. You're not using the result of Alice's experiment to retrodict what the state of the source was in the past. You can't, because the result of Alice's measurement, by itself, doesn't tell you whether the source emitted a pair of entangled photons or not--i.e., whether Alice's photon is entangled with Bob's or not. So you can't actually "retrodict" anything from Alice's measurement result.

As has already been said, what the paper is actually doing (whether its authors realize it or not--I'm not sure they do) is postulating information being sent backwards in time from Alice's measurement to the source event. That's not "retrodiction". That's backwards in time causation.
 
