Valid local explanation of Bell violations? (Pegg et al., 1999; 2008)

iste
TL;DR
Question about the plausibility of a local explanation of Bell-violating entanglement, specifically in the Kocher-Commins experiment, using quantum retrodiction.
These papers by Pegg et al. (doi: 10.1016/j.shpsb.2008.02.003 [section 4]; https://www.researchgate.net/publication/230928426_Retrodiction_in_quantum_optics [section 3.2]) seem to show that photon Bell correlations can be inferred using quantum theory in a manner compatible with locality by performing quantum retrodiction (i.e. inferring information about the past: e.g. https://doi.org/10.3390/sym13040586; more papers at the end), where they evolve backward from Alice's measured outcome and then forward again to Bob's. They model only the Kocher-Commins experiment specifically, where retrodicting an atom-field state in the past tells you the state in which Bob's photon was emitted, and they do not seem to have generalized beyond this example. Nonetheless, I think there is a general force to the argument: if quantum theory itself actually allows you to retrodict a definite state at the time of a locally-mediated correlating interaction, then non-local influences arguably seem redundant for producing the correct probabilities in entanglement scenarios similar to the one in Pegg et al.

If valid, then clearly the retrodiction from a measurement outcome assigns different states in the backward-in-time direction than the corresponding regular forward-in-time description would start with, and this looks retrocausal. At the same time, quantum retrodiction as described above is just another, equivalent way of formulating the same empirical content of quantum theory, related to regular quantum prediction by Bayes' theorem, so it is always possible to derive or express backward-in-time probabilities entirely in terms of regular forward-in-time ones. One might then ask whether interpretations in which the wavefunction isn't real avoid any purported need for retrocausation (for instance, in Barandes' indivisible approach, the wavefunction is more or less reducible to a stochastic-process description, which then doesn't leave any explicit indicators of retrocausality without importing additional ontological interpretation). It may be worth noting that although the 2008 Pegg et al. paper uses explicitly retrocausal language, the later review https://doi.org/10.3390/sym13040586 seems much more open to the non-reality of the wavefunction, and at the very least explicitly suggests that collapse should not be seen as a physical process.
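As a toy illustration of that Bayes'-theorem relationship (a sketch only: the two-preparation source, the uniform prior, and the analyzer angle are my assumptions for illustration, not the Pegg et al. model):

```python
import numpy as np

# Toy retrodiction via Bayes' theorem: a source prepares |H> or |V> with
# equal prior probability, and a polarizer at angle theta either passes or
# blocks the photon (Malus-law likelihoods).
theta = np.deg2rad(30)                              # illustrative angle
prior = {"H": 0.5, "V": 0.5}                        # assumed uniform prior
likelihood_pass = {"H": np.cos(theta) ** 2,
                   "V": np.sin(theta) ** 2}

# Forward (predictive) probability of the "pass" outcome:
p_pass = sum(prior[s] * likelihood_pass[s] for s in prior)

# Backward (retrodictive) probability of each preparation, given "pass":
retro = {s: prior[s] * likelihood_pass[s] / p_pass for s in prior}

print(p_pass)        # ~0.5: the two likelihoods average out
print(retro["H"])    # ~0.75, i.e. cos^2(30 deg)
```

The retrodictive probabilities here are built entirely out of the forward likelihoods and the prior, which is the sense in which the backward description adds no empirical content of its own.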

I'm aware that this "Parisian zig-zag" explanation has been described by various others; but from what I have skimmed, others don't seem to be explicitly suggesting that it is actually inherent in quantum theory in the same way these papers do. I guess the most important criticisms would be whether their description is truly a valid way of using quantum theory to explain those correlations, and whether their type of description can, broadly speaking, work for any entanglement experiment. Again, I'm inclined to think that if it is valid, it may not be strictly necessary that the explanation be retrocausal, at least with regard to the probabilities. But maybe there are other arguments for retrocausality here that I'm not thinking of.

More quantum retrodiction papers:

https://eprints.gla.ac.uk/334605/
arXiv:1107.5849v4 [e.g. section IVC]
arXiv:2010.05734v2 [e.g. section 2]
https://doi.org/10.1098/rsta.2023.0338
https://strathprints.strath.ac.uk/5854/
arXiv:quant-ph/0106139
arXiv:quant-ph/0207086v1
 
When I bring up that "researchgate" link, I get the abstract and the references - but not the full article.

From what I gather, in that paper they are saying that a retrodiction can be made based on one measurement that infers conditions that will affect the other measurement. Fine enough - but that retrodiction cannot be made until that first measurement has been made - and the results of that first measurement (and thus the retrodiction) can be based on events not local to the second measurement.

In other words, the retrodiction itself is non-local to the second measurement. Although the second measurement may be within the light cone of conditions described by the retrodiction, it is not within the light cone of the retrodiction itself.
 
.Scott said:
When I bring up that "researchgate" link, I get the abstract and the references - but not the full article.
That's strange; as of now it gives me both the full article on the page and a PDF link. I can only offer a different link:

https://scholar.google.co.uk/scholar?cluster=11813849258704424520&hl=en&as_sdt=0,5&as_vis=1

.Scott said:
retrodiction cannot be made until that first measurement has been made - and the results of that first measurement (and thus the retrodiction) can be based on events not local to the second measurement.
This is fair, though my counter would be that:

1) For all intents and purposes, that first measurement (e.g. Alice's) is going to have a random outcome, which wouldn't change whether it is performed simultaneously alongside Bob's, or Bob's is never made at all, or anything else.

2) It seems to me that their retrodiction from that measurement outcome would be the same whether the measurement is on an entangled state or a single-particle state on its own.

It's then not obvious that the explanation depends on anything else going on non-locally: just the first measurement outcome, and the atom-field state being retrodicted back to in the past, which locally produced the two photon states in a correlated manner, the second of which goes off to Bob.
 
iste said:
their retrodiction from that measurement outcome would be the same whether the measurement is on an entangled state or a single-particle state on its own.
If that's true, then their retrodiction can't make correct predictions, because it can't account for the presence of the Bell inequality-violating correlations in the first case, and their absence in the second.
 
PeterDonis said:
If that's true, then their retrodiction can't make correct predictions, because it can't account for the presence of the Bell inequality-violating correlations in the first case, and their absence in the second.
What I mean is that retrodicting back-in-time from the polarization measurement outcome always retrodicts the polarization state associated with that outcome. If you measure H, it retrodicts H back-in-time if the scenario was a measurement on a single-particle state. In the entanglement case, they are using this to retrodict an atom-field state in the past which emitted another photon correlated with the first, and it will produce the correct Bell-violating conditional probabilities when you measure that second single-particle state that was emitted.
 
iste said:
retrodicting back-in-time from the polarization measurement outcome always retrodicts the polarization state associated with that outcome.
Which is not an entangled state, so, as I said, this can't possibly make correct predictions about Bell inequality violations, which arise from entangled states.

iste said:
If you measure H, it retrodicts H back-in-time if the scenario was a measurement on a single-particle state.
This contradicts what you said before; before you said "always", now you're saying "only if the scenario was a measurement on a single-particle state". But knowing whether or not the measurement was on a single-particle state requires information not local to the measurement, as @.Scott has already pointed out.

iste said:
In the entanglement case, they are using this to retrodict about an atom-field state in the past which emitted another photon correlated to the first, and will produce the correct Bell violating conditional probabilities when you measure that second single particle state that was emitted.
In other words, they can't predict Bell inequality violating correlations at all--they have to have them measured first, and then claim to "retrodict" them. Sounds pointless to me.
 
iste said:
1) For all intents and purposes, that first measurement (e.g. Alice's) is going to have a random outcome, which wouldn't change whether it is performed simultaneously alongside Bob's, or Bob's is never made at all, or anything else.
Clearly, if you start with that presumption, everything is local.
But that presumption is wrong.
If it was right, there would be nothing to talk about - because that is the way we all "naturally" expect things to work.

The key to understanding the Bell inequality is to understand how one can demonstrate that that "common sense" presumption is wrong.

And the way that it is done is this:
1) Flip Bob's detector upside-down so that it is rotated 180 degrees relative to the measurements made by Alice. This makes the rest of the experiment easy to explain.
2) Make 10,000 measurements and compare the Alice/Bob results. They will all match. Basically, each entangled particle is the upside-down version of its copy.
3) Now rotate Bob's device another 5 degrees and make another 10,000 measurements. The typical result will show about 38 mismatches, less than half a percent.
4) Now rotate Bob's device another 5 degrees (a total of 10 degrees) and make another 10,000 measurements. The typical result will show about 152 mismatches.

And that's where the problem comes in. If you start with 0 differences and rotating by 5 degrees gives 38 differences, then rotating another 5 degrees should give you at most another 38 differences, for a total of 76. Somehow the measurement results are directly affected by the angular difference between the measurement settings selected by Alice and Bob. And that difference is not available to either Alice or Bob when the measurements are being made.
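The quadratic scaling behind this argument can be sanity-checked numerically; a sketch assuming the standard cos²(Δθ) coincidence law for polarization-entangled photons (so the expected counts differ somewhat from the illustrative figures above, but the key point survives: the 10-degree mismatch count is about four times, not twice, the 5-degree count):

```python
import numpy as np

# Mismatch fraction for photon pairs that agree perfectly at zero relative
# angle, assuming the standard cos^2(delta) coincidence law: the mismatch
# probability is then sin^2(delta).
def mismatch_fraction(delta_deg):
    return np.sin(np.deg2rad(delta_deg)) ** 2

n = 10_000
m5 = n * mismatch_fraction(5)     # expected mismatches at 5 degrees
m10 = n * mismatch_fraction(10)   # expected mismatches at 10 degrees

print(round(m5))      # 76
print(round(m10))     # 302
# A local hidden-value model would need the 10-degree mismatches to be at
# most the sum of two 5-degree contributions; the prediction violates that:
print(m10 > 2 * m5)   # True
```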
 
PeterDonis said:
now you're saying "only if the scenario was a measurement on a single-particle state".
I meant that it does the same thing in both cases once you have measured the outcome and evolve back from it.
PeterDonis said:
Which is not an entangled state, so, as I said, this can't possibly make correct predictions about Bell inequality violations, which arise from entangled states.
The gist of the mechanism is you measure H, retrodict H back at the source. Retrodicting that an H photon was emitted implies another V photon was also emitted due to the atom description at the source. Propagating the V photon forward in time, staying the same, and then measuring it will give you the cos^2(θ_B − θ_A) conditional probability, because Bob has measured the V photon with a cos^2 probability of Bob's analyzer orientation minus the V photon's orientation. From the description above, the V photon's orientation is conditioned on the other H photon's orientation, which is the same orientation as Alice's analyzer, which means Bob's cos^2 measurement probability is equivalent to cos^2 of Bob's analyzer orientation minus Alice's analyzer orientation. If you were to somehow modify the correlation at the source by 90° and/or −θ, you would get the other three Bell-state correlations using this same description.
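That chain of steps can be checked numerically against the direct forward calculation (a sketch: the singlet-like (|HV> − |VH>)/√2 source state and the analyzer angles are my illustrative assumptions, chosen to match the H-implies-V description; Pegg et al.'s actual atom-field treatment differs in detail):

```python
import numpy as np

def pol(theta):
    """Linear polarization state at angle theta: cos(theta)|H> + sin(theta)|V>."""
    return np.array([np.cos(theta), np.sin(theta)])

# Singlet-like source state (|HV> - |VH>)/sqrt(2): measuring H on one
# photon implies V for the other, as in the description above.
psi = (np.kron([1.0, 0.0], [0.0, 1.0]) - np.kron([0.0, 1.0], [1.0, 0.0])) / np.sqrt(2)

a, b = np.deg2rad(20), np.deg2rad(55)     # illustrative analyzer angles

# Direct forward QM: Bob's conditional pass probability, given that
# Alice's photon passed her analyzer at angle a.
bob_amp = pol(a) @ psi.reshape(2, 2)      # <a|_Alice contracted with psi
bob_state = bob_amp / np.linalg.norm(bob_amp)
p_direct = abs(pol(b) @ bob_state) ** 2

# Zig-zag chain: retrodict Alice's photon at angle a, let the source
# correlation fix Bob's photon at a + 90 degrees, then apply Malus' law.
p_zigzag = np.cos(b - (a + np.pi / 2)) ** 2

print(np.isclose(p_direct, p_zigzag))              # True
print(np.isclose(p_direct, np.sin(b - a) ** 2))    # True: sin^2(dtheta) form
```

For this state the conditional probability comes out as sin²(θ_B − θ_A), i.e. the cos² Bell form shifted by 90°, consistent with the remark about modifying the correlation at the source by 90°.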


PeterDonis said:
In other words, they can't predict Bell inequality violating correlations at all--they have to have them measured first, and then claim to "retrodict" them. Sounds pointless to me.
Well, to me, unless the retrodiction is just wrong, it makes nonlocal influence look redundant.
 
iste said:
I meant that it does the same thing in both cases once you have measured the outcome and evolve back from it.
I'm not sure what you mean by "the same".

iste said:
The gist of the mechanism is you measure H, retrodict H back at the source.
You're contradicting yourself again. If this is done regardless of whether the system was originally prepared as an entangled state or not, you obviously can't get things right in both cases. But if it's only done if the system was not originally prepared in an entangled state, then what you're saying here makes no sense.

iste said:
Retrodicting that an H photon was emitted implies another V photon was also emitted due to the atom description at the source.
Only if the system was originally prepared in an entangled state. If it wasn't, retrodicting H for one photon tells you nothing at all about the other.

iste said:
Propagating the V photon forward in time, staying the same, and then measuring it will give you the cos^2(θ_B − θ_A) conditional probability, because Bob has measured the V photon with a cos^2 probability of Bob's analyzer orientation minus the V photon's orientation.
What do you mean by "the V photon's orientation"?

iste said:
From the description above, the V photon's orientation is conditioned on the other H photon's orientation
This makes no sense to me, unless it's just another way of saying that the two photons are prepared in a particular entangled state.
 
  • #10
.Scott said:
Clearly, if you start with that presumption, everything is local.
But that presumption is wrong.
If it was right, there would be nothing to talk about - because that is the way we all "naturally" expect things to work.

The key to understanding the Bell inequality is to understand how one can demonstrate that that "common sense" presumption is wrong.

And the way that it is done is this:
1) Flip Bob's detector upside-down so that it is rotated 180 degrees relative to the measurements made by Alice. This makes the rest of the experiment easy to explain.
2) Make 10,000 measurements and compare the Alice/Bob results. They will all match. Basically, each entangled particle is the upside-down version of its copy.
3) Now rotate Bob's device another 5 degrees and make another 10,000 measurements. The typical result will show about 38 mismatches, less than half a percent.
4) Now rotate Bob's device another 5 degrees (a total of 10 degrees) and make another 10,000 measurements. The typical result will show about 152 mismatches.

And that's where the problem comes in. If you start with 0 differences and rotating by 5 degrees gives 38 differences, then rotating another 5 degrees should give you at most another 38 differences, for a total of 76. Somehow the measurement results are directly affected by the angular difference between the measurement settings selected by Alice and Bob. And that difference is not available to either Alice or Bob when the measurements are being made.
The papers infer the correct Bell correlation, using only quantum theory, in a manner that looks local, or at least doesn't require distant communication in the inference steps. The way it does this is that the retrodiction appears to violate statistical independence: as far as I can tell, evolving back from a measured polarization outcome results in that same measured state in general, when we have no information about the past.
 
  • #11
iste said:
a manner that looks local
I'm not sure how it "looks local" given that, as @.Scott has already pointed out, the information used does not all lie in a single past light cone.
 
  • #12
iste said:
it makes nonlocal influence look redundant.
Again, I'm not sure why. Consider an obvious "hidden variable" interpretation of the retrodiction: measuring H for the first photon sends the H information back in time along the first photon's worldline to the event where the two photons are prepared, and then forward in time along the second photon's worldline to the measurement of the second photon. (This is somewhat similar to John Cramer's transactional interpretation.) Is this "local"? I don't see how. "Local" doesn't allow backwards in time propagation. And any other way of "explaining" how the retrodiction works (assuming it does work) will have the same problem.
 
  • #14
PeterDonis said:
I'm not sure how it "looks local" given that, as @.Scott has already pointed out, the information used does not all lie in a single past light cone.
I think it does, though, because the explanation only relies on a random measurement outcome and on what that outcome retrodicts backward in time, in a manner which only really depends on that outcome.

PeterDonis said:
Again, I'm not sure why. Consider an obvious "hidden variable" interpretation of the retrodiction: measuring H for the first photon sends the H information back in time along the first photon's worldline to the event where the two photons are prepared, and then forward in time along the second photon's worldline to the measurement of the second photon. (This is somewhat similar to John Cramer's transactional interpretation.) Is this "local"? I don't see how. "Local" doesn't allow backwards in time propagation. And any other way of "explaining" how the retrodiction works (assuming it does work) will have the same problem.

Well, if Alice's photon is not telling Bob's photon what to do across space instantaneously, but instead they are both being told what to do at the source, and the correlation is preserved because the states don't change during time evolution, that seems local to me. The question for this purported explanation, I guess, is whether backward propagation really is something like "time-travel".
 
  • #15
iste said:
Well, if Alice's photon is not telling Bob's photon what to do across space instantaneously, but instead they are both being told what to do at the source, and the correlation is preserved because the states don't change during time evolution, that seems local to me. The question for this purported explanation, I guess, is whether backward propagation really is something like "time-travel".
It is; it is retrocausality. You might say it is "local" if you wish, but you are still breaking causality, which seems as or more problematic than "nonlocality" (whatever that means).
 
  • #16
iste said:
I think it does, though, because the explanation only relies on a random measurement outcome and on what that outcome retrodicts backward in time, in a manner which only really depends on that outcome.
The Bell experiments demonstrate that the measurement results are the result of both measurement choices. The only emitter contribution that is demonstrated by the experiments is that the particles were created entangled.

iste said:
Well, if Alice's photon is not telling Bob's photon what to do across space instantaneously, but instead they are both being told what to do at the source, and the correlation is preserved because the states don't change during time evolution, that seems local to me.
The "normal" (i.e., naive) expectation would be that some kind of hidden value (like a polarization angle) is created at the emitter and carried by each photon; then, when the measuring device is encountered, the result is some function of that hidden value and the measurement orientation. That is what you are describing, and that would be local.

But the actual results from the Bell experiments demonstrate that isn't what's happening. Bell-type experiments prove that two 15-degree measurement rotations are not the same as one 30-degree measurement rotation. So the result must also be a function of the angular difference between the measuring devices.

Each photon in the pair does not discover the orientation of its own measuring device until it reaches that device. So, not only does the emitter not have enough local information when it emits the photons, the combination of both photons does not have enough information until both of them reach their respective measuring devices. And just to further confound this situation, because of Special Relativity, there is no "preferred" sequence as to which detector (Alice's or Bob's) makes the measurement first. Reference frames are available that would show Alice before Bob, Bob before Alice, or both at once. So, it is completely fair to say that Alice's measurement result is affected by Bob's measurement orientation, even when (from your reference frame) Bob's measurement orientation has not even been chosen yet.

iste said:
The question for this purported explanation, I guess, is whether backward propagation really is something like "time-travel".
Backward propagation is non-local. But more importantly, once you have demonstrated that the system operates non-locally, you should be aware that terms like "propagation" suggest a very local cause-effect sequence. If interpreting this system as having "backward propagation" helps you in working out what is going on, then go with it. But what is basic is the QM numerical predictions and the experimental results. Everything beyond that is just the sweet syrup needed to swallow the pill.

With regard to time travel: at its essence, time travel would involve sending information back in time, or at least outside your light cone. These non-local effects "compromise" light cones, but with no chance of ever allowing information to be sent.
 
  • #17
pines-demon said:
It is; it is retrocausality. You might say it is "local" if you wish, but you are still breaking causality, which seems as or more problematic than "nonlocality" (whatever that means).
Maybe, but I think that if this retrodiction is problematic, you have to ask why it seems to exist as an option in quantum theory in the first place, one that can be used in various applications as described in the papers of the original post. If retrodiction already seems, to some extent, to exist in the theory, then it might be more parsimonious to ditch "nonlocality" if one can give an explanation of it in terms of retrodiction.
 
  • #18
pines-demon said:
It is; it is retrocausality. You might say it is "local" if you wish, but you are still breaking causality, which seems as or more problematic than "nonlocality" (whatever that means).
I'm having trouble finding John Bell's original 1964 paper. But it discussed the "Local Reality Theorem" at length and, as I recall, it included the term "non-local".

I did find a source that quotes a section of Bell's original paper. Here is an excerpt that uses the term "non-local" in context:

The paradox of Einstein, Podolsky and Rosen was advanced as an argument that quantum mechanics could not be a complete theory but should be supplemented by additional variables. These additional variables were to restore to the theory causality and locality. In this note that idea will be formulated mathematically and shown to be incompatible with the statistical predictions of quantum mechanics. It is the requirement of locality, or more precisely that the result of a measurement on one system be unaffected by operations on a distant system with which it has interacted in the past, that creates the essential difficulty. There have been attempts [by von Neumann] to show that even without such a separability or locality requirement no ‘hidden variable’ interpretation of quantum mechanics is possible. These attempts have been examined [by Bell] elsewhere and found wanting. Moreover, a hidden variable interpretation of elementary quantum theory has been explicitly constructed [by Bohm]. That particular interpretation has indeed a gross non-local structure. This is characteristic, according to the result to be proved here, of any such theory which reproduces exactly the quantum mechanical predictions.
 
  • #19
.Scott said:
The Bell experiments demonstrate that the measurement results are the result of both measurement choices. The only emitter contribution that is demonstrated by the experiments is that the particles were created entangled.


The "normal" (i.e., naive) expectation would be that some kind of hidden value (like a polarization angle) is created at the emitter and carried by each photon; then, when the measuring device is encountered, the result is some function of that hidden value and the measurement orientation. That is what you are describing, and that would be local.

But the actual results from the Bell experiments demonstrate that isn't what's happening. Bell-type experiments prove that two 15-degree measurement rotations are not the same as one 30-degree measurement rotation. So the result must also be a function of the angular difference between the measuring devices.

Each photon in the pair does not discover the orientation of its own measuring device until it reaches that device. So, not only does the emitter not have enough local information when it emits the photons, the combination of both photons does not have enough information until both of them reach their respective measuring devices. And just to further confound this situation, because of Special Relativity, there is no "preferred" sequence as to which detector (Alice's or Bob's) makes the measurement first. Reference frames are available that would show Alice before Bob, Bob before Alice, or both at once. So, it is completely fair to say that Alice's measurement result is affected by Bob's measurement orientation, even when (from your reference frame) Bob's measurement orientation has not even been chosen yet.


Backward propagation is non-local. But more importantly, once you have demonstrated that the system operates non-locally, you should be aware that terms like "propagation" suggest a very local cause-effect sequence. If interpreting this system as having "backward propagation" helps you in working out what is going on, then go with it. But what is basic is the QM numerical predictions and the experimental results. Everything beyond that is just the sweet syrup needed to swallow the pill.

With regard to time travel: at its essence, time travel would involve sending information back in time, or at least outside your light cone. These non-local effects "compromise" light cones, but with no chance of ever allowing information to be sent.
The description used in the papers is done exclusively with quantum theory, so I think what it comes down to is whether backward propagation is non-local. I think that, at least from the perspective of relativity, it is local. Whether it is local in some deeper sense depends on whether this backward propagation is actual retrocausation or merely retrodiction. At the level of probabilities this doesn't seem retrocausal to me, because the forward and backward descriptions are related by Bayes' theorem; and because the conventional forward-in-time entanglement description only really gives you the measurement results, I don't see any explicit contradictions in terms of probabilities. But this is just at the level of probabilities, whereas I think the obviousness of retrocausation would come from the overt changes to the quantum state when retrodicting.
 
  • #20
iste said:
the explanation only relies on a random measurement outcome
No, it relies on two "random" measurement outcomes which are spacelike separated, but which are nevertheless correlated in a way that violates the Bell inequalities. As I have already commented, the "explanation" implicitly relies on some kind of information traveling backwards in time from one measurement to the common preparation event, then forward in time to the other measurement. Calling that "local" is IMO a gross abuse of language.

iste said:
if Alice's photon is not telling Bob's photon what to do across space instantaneously, but instead they are both being told what to do at the source
No, neither of these is what the "explanation" is saying. The "explanation" is saying that Alice's photon, when measured, tells the source what its result was (backwards in time), and then the source tells Bob's photon (forwards in time). Just invoking the source alone telling Alice's and Bob's photons what to do is not enough, because the source, before Alice's measurement is made, doesn't know what Alice's result is, and knowledge of Alice's result is crucial in the "explanation" that's being provided.
 
  • #21
iste said:
I think what it comes down to is whether backward propagation is non-local. I think that, at least from the perspective of relativity, it is local.
I think you need to justify this statement with some kind of reference to the literature. AFAIK nobody has ever tried to defend this point of view.
 
  • #22
iste said:
Maybe, but I think that if this retrodiction is problematic, you have to ask why it seems to exist as an option in quantum theory in the first place, one that can be used in various applications as described in the papers of the original post. If retrodiction already seems, to some extent, to exist in the theory, then it might be more parsimonious to ditch "nonlocality" if one can give an explanation of it in terms of retrodiction.
I think I know what you are getting at.
It is common to see the term "quantum indeterminacy" melded with terms like "random" and "non-deterministic", and perhaps I will start a thread on that topic sometime.

"Retrodiction", and specifically the way in which you are using the term, presumes a deterministic universe. So, by that logic, not only were the results of the experiment inherent during the Big Bang, all of mankind's development and the fact that such experiments would be conducted have also been inherent all along. This line of thought falls under the heading of "Superdeterminism". So, in a very real sense, everything may already be "local". The Wikipedia article on superdeterminism addresses this.

But expanding the term "local" to include the effects of superdeterminism is not a path I would recommend. Superdeterminism involves machinery that current physics has not resolved; it is pretty much terra incognita. More importantly, it involves machinery that would be limited by physics in its ability to reproduce actual Bell experiments (or any other real-life activity), because it would require that this huge amount of universal "local" information be copied into the machine, which would likely amount to a no-cloning violation.

And there is another problem with retrodiction. What you are hoping to do is find a retrodictive boundary to your experiment. But there is no guarantee that any such boundary (short of "everything") exists. Presuming that both measurements have sufficient information to deduce the other's measurement orientation, how far back in history did the retrodiction have to go to collect that information? About 9 years ago, physicists at MIT used 600-year-old starlight to make the Alice/Bob measurement decisions. So any retrodiction would have to go back well before that to explain how the needed information could be available locally.

It's probably fair to say that, given the current lack of depth in our understanding of both "non-locality" and "superdeterminism", and given the common reasons they have for being invoked, they should be considered two different interpretations of what we see in Bell experiments.
 
  • #23
iste said:
I think that if this retrodiction is problematic, you have to ask why it seems to exist as an option in quantum theory in the first place
It doesn't, in any sense that matters.

What you are calling "retrodiction" is not actually "retrodicting" anything. You're not using the result of Alice's experiment to retrodict what the state of the source was in the past. You can't, because the result of Alice's measurement, by itself, doesn't tell you whether the source emitted a pair of entangled photons or not--i.e., whether Alice's photon is entangled with Bob's or not. So you can't actually "retrodict" anything from Alice's measurement result.

As has already been said, what the paper is actually doing (whether its authors realize it or not--I'm not sure they do) is postulating information being sent backwards in time from Alice's measurement to the source event. That's not "retrodiction". That's backwards in time causation.
 
  • #24
PeterDonis said:
As I have already commented, the "explanation" implicitly relies on some kind of information traveling backwards in time from one measurement to the common preparation event, then forward in time to the other measurement. Calling that "local" is IMO a gross abuse of language.
If it relies on information going backward in time, then it doesn't depend on the other, spatially separated measurement in any way that isn't mediated by the source.

When I say local, I just mean whether it conflicts with relativity in the sense of requiring faster-than-light influence across space, and then perhaps requiring a preferred reference frame due to the relativity of simultaneity or something like that. From that standpoint, this description is local. Sure, you can say that if it is genuine retrocausality then it is not local temporally, and I will accept that, but I think it's still local spatially with regard to communication between Alice and Bob.

PeterDonis said:
I think you need to justify this statement with some kind of reference to the literature. AFAIK nobody has ever tried to defend this point of view
As I noted in my original post, this is a version of the Parisian zig-zag advanced by various others, motivated by the idea that it could be a viable local explanation of entanglement: e.g.

arXiv:1906.04313v3
https://arxiv.org/abs/1706.02290


PeterDonis said:
As has already been said, what the paper is actually doing (whether its authors realize it or not--I'm not sure they do) is postulating information being sent backwards in time from Alice's measurement to the source event. That's not "retrodiction". That's backwards in time causation.
The point of the sentence you quoted, in the context of what I was replying to, was just that you have to explain why quantum theory lets you do this retrocausation. They are doing it with quantum theory as part of a general method that can be applied to various things to get correct answers, so the question is why quantum theory lets them do this if it is purely postulation. At the same time, I don't think I have yet seen a knock-down argument that this "backwards in time causation" cannot actually be retrodiction; it seems interpretation-dependent, and most people think of the wavefunction as a fundamental physical object, so for them it must be backwards-in-time causation.
 
  • #25
.Scott said:
presumes a deterministic universe
I don't think so. It's related to Bayes' theorem, which concerns probabilities, so I can't see it as making any presumption about determinism.

.Scott said:
And there is another problem with retrodiction. What you are hoping to do is to find a retrodictive boundary to your experiment

As far as I can tell, the only retrodictive boundary required in these entanglement experiments, in the context of the retrodiction as formulated in the papers, is a lack of any information about the past, and it seems to me this is already implied by the way these experiments are prepared, which always produces completely random measurement probabilities.
 
  • #26
I will give my take here on why it is not obvious that the backpropagation is "time-travel", connected to the fact that quantum retrodiction is related to the normal forward-in-time usage by Bayes' rule. You can then infer the appropriate retrodictive probabilities using conventional probabilistic reasoning.

Take this example by Vaidman (arXiv:quant-ph/9807075v1 [section 6]):

"The symmetry aspect of the process of measurement which is important for our discus-
sion is that the process of measurement at time t affects identically forward and backward
evolving states. A possible operational test of this condition is, that the probabilities
for measurements performed immediately after t, given a certain incoming state and no
information from the future, are identical to probabilities for the same measurements per-
formed immediately before t, given the same (complex conjugate) incoming state evolving
backward in time and no information from the past (see erasure procedure described in
previous section)."


Say you make a spin or polarization measurement at t, with outcome probabilities of 1/2 using an arbitrary measurement setting, and then perform a second measurement with an arbitrary setting at t + 1. You clearly get a joint probability of 1/2 cos^2θ. Perform Bayesian inversion and this is identical in the reverse direction: i.e. t given t + 1 now has a conditional probability of cos^2θ, the same as the conditional probability of t + 1 given t. The marginal probabilities at both t and t + 1 are also both 1/2.

What if we repeat the scenario but now starting at t - 1, where under the same conditions the marginal probability should also be 1/2? The second measurement is now at t, and the exact same Bayesian joint probabilities will be found.

Considering both scenarios together, the probabilities forward from t and backward from t will be the same. If I change the measurement setting at t, I then don't need to explain changes to either the forward or backward probabilities with some additional influences, retrocausal or otherwise. Bayesian inversion of the joint probabilities for t - 1 and t always explains the backward-in-time prediction, which is just normal probabilistic reasoning. Once you have chosen a measurement setting at t such that you have measured a pair of spin states in orthogonal directions, it is impossible from the standpoint of normal Bayesian reasoning to retrodict probabilities at t - 1 that are inconsistent with the regular probabilities you would expect to measure on those orthogonal spin states forward-in-time at t + 1 using arbitrary choices of measurement setting. The symmetry of the measurement process in the Vaidman quote is fundamentally Bayesian in the manner described by the retrodiction sources in my original post. As long as the system doesn't change between measurements, this would be the case for arbitrary times into the future or past.
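The two-measurement symmetry above is easy to check numerically. Below is a minimal sketch of my own (not from the papers): the relative setting angle of 25 degrees is an arbitrary choice, and the "joint" function just encodes the 1/2 cos^2θ rule for one outcome pair.

```python
import math

def joint(theta_deg):
    # Joint probability for one outcome pair of two sequential
    # spin/polarization measurements whose settings differ by theta:
    # P(a at t, b at t+1) = 1/2 * cos^2(theta)
    th = math.radians(theta_deg)
    return 0.5 * math.cos(th) ** 2

theta = 25.0  # arbitrary relative setting angle (hypothetical value)

# Single-time marginals are 1/2 at both times (maximally random outcomes).
p_t, p_t1 = 0.5, 0.5

# Forward conditional: P(outcome at t+1 | outcome at t)
forward = joint(theta) / p_t

# Bayesian inversion: P(outcome at t | outcome at t+1)
backward = joint(theta) / p_t1

# The two conditionals coincide, and both equal cos^2(theta).
assert abs(forward - backward) < 1e-12
assert abs(forward - math.cos(math.radians(theta)) ** 2) < 1e-12
```

The forward and inverted conditionals coincide precisely because the marginals at both times are 1/2, which is the no-prior-information condition in the Vaidman quote.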

--------

The next bit isn't directly relevant but I feel like it should be described just because it houses the quantum properties in the same description.

One can note that consecutive spin or polarization measurements are obviously incompatible, but this doesn't affect the description above because the marginal probability of 1/2 is an exception that allows one to define a regular classical joint probability pairwise for spin measurements. But contextuality does arise in the above description.

You will see that if you have a definite choice of measurement setting for each of t - 1, t and t + 1, changing the order of the measurement settings at these three times will change the joint probability for the three times, even though the one-time marginal probabilities of 1/2 at each time will always be preserved regardless of ordering. Pairwise joint probabilities for adjacent measurements always have the same functional form of 1/2 cos^2θ.

But if the joint probability relating t - 1 and t has the functional form 1/2 cos^2θ, and the joint probability relating t and t + 1 has the functional form 1/2 cos^2θ, the joint probability relating the orientations at t - 1 and t + 1 found by marginalization will not have this functional form, because these measurements are not adjacent: the expected quantum joint probability 1/2 cos^2θ for those specific orientations cannot be found by marginalizing the joint probabilities for this sequence, due to the disturbing influence of the measurement at t. You can only find the pairwise joint distribution for those measurement orientations by marginalization if you change the order of the sequence so that those measurement settings are now adjacent, which changes the joint probabilities. All this seems analogous to the kind of contextuality related to Bell's theorem, but described in terms of a sequence of measurements of arbitrary length instead.

Below is an example joint probability distribution (JPD 1) for three successive measurements with the marginalization properties described above, and then the corresponding JPD 2 where I have just swapped the second and third measurement setting orientations, which changes the three-time joint distribution and also the pairwise joint probabilities you can marginalize. But all single-time marginals stay 1/2. The first measurement probability in every sequence is 1/2, and the next two follow cos^2 of the angle between adjacent orientations. On the left-hand side are the effective orientations of the measurement results:

JPD 1

0, 70, 50: .051646988
0, 70, -40: .0068419
0, -20, 50: .051646988
0, -20, -40: .389864121


90, 70, 50: .389864121
90, 70, -40: .051646988
90, -20, 50: .0068419
90, -20, -40: .051646988


JPD 2

0, 50, 70: .182421755
0, 50, -20: .0241662
0, -40, 70: .034322689
0, -40, -20: .259089355


90, 50, 70: .259089355
90, 50, -20: .034322689
90, -40, 70: .0241662
90, -40, -20: .182421755

Edit: also note, these probabilities may be off a little bit when summed, but I am pretty sure this is just due to rounding limitations of the calculator I was using.

One might also note that the fact that non-adjacent measurement orientations don't marginalize as they classically should can be seen as the indivisibility in the description.
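For what it's worth, the two tables above can be reproduced from the rule stated in the post (first outcome has probability 1/2, each subsequent outcome cos^2 of the angle to the previous effective orientation). This is just my own sketch of that arithmetic; the orientation lists are taken from the tables:

```python
import math
from itertools import product

def cos2(deg):
    # cos^2 of an angle given in degrees
    return math.cos(math.radians(deg)) ** 2

def three_time_jpd(first, second, third):
    # Three-time joint distribution over effective outcome orientations:
    # P(a, b, c) = 1/2 * cos^2(b - a) * cos^2(c - b)
    return {(a, b, c): 0.5 * cos2(b - a) * cos2(c - b)
            for a, b, c in product(first, second, third)}

# Outcome orientations as in the tables (each measurement's pair is orthogonal).
jpd1 = three_time_jpd([0, 90], [70, -20], [50, -40])
jpd2 = three_time_jpd([0, 90], [50, -40], [70, -20])  # 2nd/3rd settings swapped

# Both distributions normalize, and every single-time marginal is 1/2.
assert abs(sum(jpd1.values()) - 1) < 1e-12
assert abs(sum(p for (a, _, _), p in jpd1.items() if a == 0) - 0.5) < 1e-12

# The non-adjacent pair (0, 50) in JPD 1 does NOT marginalize to 1/2*cos^2(50)...
pair_nonadjacent = jpd1[(0, 70, 50)] + jpd1[(0, -20, 50)]
assert abs(pair_nonadjacent - 0.5 * cos2(50)) > 1e-3

# ...but the same orientations, adjacent in JPD 2, do.
pair_adjacent = jpd2[(0, 50, 70)] + jpd2[(0, 50, -20)]
assert abs(pair_adjacent - 0.5 * cos2(50)) < 1e-12
```

Computed this way the distributions sum to 1 exactly (up to floating point), so the small summing discrepancies in the tables are indeed just rounding.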
 
  • #27
iste said:
OK. Now I can read the article by Sutherland.

That Sutherland article does not presume "causality" as part of "locality". Obviously, asserting a combination of causality and retrocausality can be used to completely remove any locality restrictions. That is an outlandishly trite point to make.

I did find pdfs for John Bell's original 1964 paper here and here.
This is from the introduction to Bell's paper:
These additional variables were to restore to the theory causality and locality [2].
The Bell article I read at about that time included Bell's work but also went into "Local Realism" in more detail. But Bell did address both causality and locality.

So, even given that "retrocausality" is compatible with "locality", it is still inconsistent with Bell's Inequality.

In contemporary discussions, locality presumes causality to the exclusion of retrocausality.
For example, the "locality" wiki article repeatedly uses the term "locality" in statements that require the reader to presume "causality" only.
 
  • #28
iste said:
most people think of the wavefunction as a fundamental physical object so it must be backwards in time causation
The information that is being sent back in time from Alice's measurement to the source, and then forward in time from the source to Bob's measurement, according to the "explanation" we are discussing, is not a wave function. It's Alice's measurement result. Which, as I've already said, does not allow you to determine the wave function at the source.
 
  • #29
PeterDonis said:
The information that is being sent back in time from Alice's measurement to the source, and then forward in time from the source to Bob's measurement, according to the "explanation" we are discussing, is not a wave function. It's Alice's measurement result. Which, as I've already said, does not allow you to determine the wave function at the source.
They are evolving a quantum state backward from the measurement. They are using quantum theory, just exploiting that it seems to work backward in time as well as it works forward. The states obtained from the backward evolution are just not necessarily the same as the ones forward in time, but this seems to be a property of quantum theory.
 
  • #30
iste said:
They are evolving a quantum state backward from the measurement.
No, they are not. A measurement result is not the same as the quantum state before the measurement. They are "evolving" Alice's measurement result back in time to the source, and then propagating that information forward to Bob's measurement. Nowhere in that process does the actual entangled state of Bob and Alice's photons before measurement appear. In effect they are ignoring all possible results except the one that actually happens. Whatever that is, it is not what you are saying in the quote above.

iste said:
The states using the backward evolution
Are not a "property of quantum theory" at all. They are something the authors of this paper added themselves. Claims by them to the contrary are simply wrong. As I've already said, I'm not sure the authors of the paper understand what they are actually doing. (This is typical of papers you find on ResearchGate, which we often don't consider valid references here at all. We're giving you a break by allowing it in this thread.)
 
