Does Time-Symmetry Imply Retrocausality? How the Quantum World Says “Maybe”?

MrRobotoToo
I recently came across this paper by philosopher of science Huw Price, where he gives an elegantly simple argument for why any realistic interpretation of quantum mechanics that doesn’t incorporate an ontic wave function (a property he calls ‘Discreteness’) and that is also time-symmetric must necessarily be retrocausal. Here, ‘time-symmetric’ means that the equation of motion is left invariant by the transformation t→-t; it’s basically the requirement that if a process obeys some law when it is run from the past into the future, then it must obey the same law when run from the future into the past. Almost all of the fundamental laws of physics are time-symmetric in this sense, including Newton’s second law, Maxwell’s equations, Einstein’s field equations, and Schrödinger’s equation (I wrote ‘almost’ because the equations that govern the weak nuclear interaction have a slight time asymmetry).
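As a concrete illustration of what that invariance means (my own worked example, not taken from Price's paper), take Newton's second law with a force that depends only on position. If ##x(t)## is a solution, then so is the time-reversed motion ##y(t) := x(-t)##, since
$$m\,\ddot{y}(t) = m\,\ddot{x}(-t) = F\big(x(-t)\big) = F\big(y(t)\big),$$
so the reversed process obeys exactly the same law. (For the Schrödinger equation the analogous statement is usually phrased with an accompanying complex conjugation of the wave function, but the idea is the same.)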

He also wrote a more popular article with his collaborator Ken Wharton where they give a retrocausal explanation of Bell experiments. Retrocausality is able to provide a local hidden-variables account of these experiments because it rejects the statistical independence (SI) assumption of Bell’s Theorem. The SI assumption, which is also rejected by superdeterminism, states that there is no correlation between the hidden variable that determines the spins of the entangled pairs of particles and the experimenters’ choices of detector settings. The main difference between superdeterminism and retrocausality is that the former presupposes that the correlation is the result of a common cause lying in the experimenters’ and hidden variable’s shared causal history, whereas the latter assumes that the detector settings have a direct causal influence on the past values of the hidden variable.
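For anyone who wants to see the numbers, here is a minimal sketch (mine, using only the textbook singlet correlation ##E(a,b) = -\cos(a-b)##) of the CHSH quantity that any local hidden-variable model obeying SI must keep at or below 2, while quantum mechanics reaches ##2\sqrt{2}##:

[CODE=python]
import numpy as np

def E(a, b):
    # Quantum-mechanical correlation for a spin-singlet pair measured at angles a and b
    return -np.cos(a - b)

# Standard CHSH settings
a1, a2 = 0.0, np.pi / 2
b1, b2 = np.pi / 4, 3 * np.pi / 4

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))  # ~2.828 = 2*sqrt(2), above the bound of 2 that holds if SI (plus locality and realism) is assumed
[/CODE]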
 
MrRobotoToo said:
I recently came across this paper by philosopher of science Huw Price, where he gives an elegantly simple argument for why any realistic interpretation of quantum mechanics that doesn’t incorporate an ontic wave function (a property he calls ‘Discreteness’) and that is also time-symmetric must necessarily be retrocausal. Here, ‘time-symmetric’ means that the equation of motion is left invariant by the transformation t→-t; it’s basically the requirement that if a process obeys some law when it is run from the past into the future, then it must obey the same law when run from the future into the past. Almost all of the fundamental laws of physics are time-symmetric in this sense, including Newton’s second law, Maxwell’s equations, Einstein’s field equations, and Schrödinger’s equation (I wrote ‘almost’ because the equations that govern the weak nuclear interaction have a slight time asymmetry).

He also wrote a more popular article with his collaborator Ken Wharton where they give a retrocausal explanation of Bell experiments. Retrocausality is able to provide a local hidden-variables account of these experiments because it rejects the statistical independence (SI) assumption of Bell’s Theorem. The SI assumption, which is also rejected by superdeterminism, states that there is no correlation between the hidden variable that determines the spins of the entangled pairs of particles and the experimenters’ choices of detector settings. The main difference between superdeterminism and retrocausality is that the former presupposes that the correlation is the result of a common cause lying in the experimenters’ and hidden variable’s shared causal history, whereas the latter assumes that the detector settings have a direct causal influence on the past values of the hidden variable.
The way that I've heard it explained is that to explain phenomena like quantum entanglement you need to sacrifice at least one of three things: causality, locality, or "realism" (although you could sacrifice more than one). But, for example, I don't see why you couldn't have a causal, time-symmetric quantum theory that is non-local.

Presumably you can't sacrifice realism in a "realistic interpretation", although "realistic" may be being used in a different sense in the phrase "realistic interpretation" than it is in the trilemma of causality, locality, and realism.
 
  • Like
Likes MrRobotoToo
ohwilleke said:
The way that I've heard it explained is that to explain phenomena like quantum entanglement you need to sacrifice at least one of three things: causality, locality, or "realism" (although you could sacrifice more than one). But, for example, I don't see why you couldn't have a causal, time-symmetric quantum theory that is non-local.

Presumably you can't sacrifice realism in a "realistic interpretation", although "realistic" may be being used in a different sense in the phrase "realistic interpretation" than it is in the trilemma of causality, locality, and realism.
I suppose that retrocausal interpretations somewhat counterintuitively sacrifice the causality option, in the sense that the value of the hidden variable isn't entirely determined by what lies in its past light cone. This is in contrast to superdeterminism, which also rejects the statistical independence assumption, but where both the hidden variable and the experimenters' choices of detector settings are determined by their causal pasts (leaving alone the whole issue of conspiratorial fine-tuning of initial conditions that's generally acknowledged to be required to make superdeterminism work).
 
MrRobotoToo said:
I suppose that retrocausal interpretations somewhat counterintuitively sacrifice the causality option, in the sense that the value of the hidden variable isn't entirely determined by what lies in its past light cone. This is in contrast to superdeterminism, which also rejects the statistical independence assumption, but where both the hidden variable and the experimenters' choices of detector settings are determined by their causal pasts (leaving alone the whole issue of conspiratorial fine-tuning of initial conditions that's generally acknowledged to be required to make superdeterminism work).
There is so much debate over terminology in quantum interpretations that fixing the right words is complicated. But the point is how we arrive at Bell's inequality.
I would argue that retrocausality is such a big hack that it destroys the whole Bell proof if accepted. However I think it suffices to say that it is the same as allowing non-locality (as spacelike action at a distance is a kind of back-in-time action in another reference frame).

Personally, if the intention of an interpretation is to show that the quirkiness of quantum mechanics can be easily understood under some interpretation, then retrocausality fails, as it is at least as weird as usual quantum mechanics.
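To spell out the point in the parenthesis (standard special relativity, nothing interpretation-specific): for two events separated by ##\Delta x## and ##\Delta t##, a boost with speed ##v## gives
$$\Delta t' = \gamma\left(\Delta t - \frac{v\,\Delta x}{c^2}\right),$$
and if the separation is spacelike (##|\Delta x| > c\,|\Delta t|##) there is always some ##|v| < c## that flips the sign of ##\Delta t'##. So "instantaneous action at a distance" in one frame is literally backwards-in-time action in another.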
 
  • Like
Likes MrRobotoToo
“I would argue that retrocausality is such a big hack that it destroys the whole Bell proof if accepted.”

Most of the major interpretations break the proof in one way or another: Copenhagen saves locality by sacrificing reality; Bohm does the opposite by sacrificing the former to save the latter; Many-Worlds saves both by sacrificing the uniqueness of measurement outcomes, etc. One has to choose one’s poison when playing the interpretation game😂

“However I think it suffices to say that it is the same as allowing non-locality (as spacelike action at a distance is a kind of back-in-time action in another reference frame).”

Well, in Bell experiments the measurement events always share at least one causal influence lying in the intersection of their past light cones–namely, the source events that produce the entangled pairs of particles.
 
pines-demon said:
it destroys the whole Bell proof if accepted.
We already know Nature violates some premise of the Bell proof, because Nature violates the Bell inequalities.
 
  • Like
Likes Nugatory, ohwilleke, SergejMaterov and 1 other person
MrRobotoToo said:
“I would argue that retrocausality is such a big hack that it destroys the whole Bell proof if accepted.”

Most of the major interpretations break the proof in one way or another: Copenhagen saves locality by sacrificing reality; Bohm does the opposite by sacrificing the former to save the latter; Many-Worlds saves both by sacrificing the uniqueness of measurement outcomes, etc. One has to choose one’s poison when playing the interpretation game😂
I agree, I'm just saying I would not choose the retrocausality poison over the others.
MrRobotoToo said:
“However I think it suffices to say that it is the same as allowing non-locality (as spacelike action at a distance is a kind of back-in-time action in another reference frame).”

Well, in Bell experiments the measurement events always share at least one causal influence lying in the intersection of their past light cones–namely, the source events that produce the entangled pairs of particles.
Sure, in retrocausality you throw all that away, so violations of the Bell inequality follow with little issue.
PeterDonis said:
We already know Nature violates some premise of the Bell proof, because Nature violates the Bell inequalities.
Never said otherwise. I'm just saying that I prefer not to throw the causality baby out with the bathwater.
 
MrRobotoToo said:
Well, in Bell experiments the measurement events always share at least one causal influence lying in the intersection of their past light cones–namely, the source events that produce the entangled pairs of particles.
Whoops! :smile:

You are overlooking numerous experiments in which photons are entangled which have never existed in a common light cone. They are produced by independent sources. This is accomplished via remote entanglement swapping. There is no intersection of common light cones. The decision to entangle such photons - or not - can be freely made by the experimenter, who can also do that remotely.

Such a decision can even be made *after* the entangled photons were measured. That wrecks normal (Einsteinian) concepts of causality. See for example the following paper by a team led by Nobel laureate Anton Zeilinger:

Experimental delayed-choice entanglement swapping (2012)
Xiao-song Ma, Stefan Zotter, Johannes Kofler, Rupert Ursin, Thomas Jennewein, Časlav Brukner, Anton Zeilinger

"If Victor projects his two photons onto an entangled state, Alice’s and Bob’s photons are entangled although they have never interacted or shared any common past. ... In [Peres'] gedanken experiment, Victor is free to choose either to project his two photons onto an entangled state and thus project Alice’s and Bob’s photons onto an entangled state, or to measure them individually and then project Alice’s and Bob’s photons onto a separable state. In order to experimentally realize Peres’ gedanken experiment, we place Victor’s choice and measurement in the time-like future of Alice’s and Bob’s measurements, providing a “delayed-choice” configuration in any and all reference frames."
 
Last edited:
  • Wow
Likes MrRobotoToo
DrChinese said:
Whoops! :smile:

You are overlooking numerous experiments in which photons are entangled which have never existed in a common light cone. They are produced by independent sources. This is accomplished via remote entanglement swapping. There is no intersection of common light cones. The decision to entangle such photons - or not - can be freely made by the experimenter, who can also do that remotely.

Such a decision can even be made *after* the entangled photons were measured. That wrecks normal (Einsteinian) concepts of causality. See for example the following paper by a team led by Nobel laureate Anton Zeilinger:

Experimental delayed-choice entanglement swapping (2012)
Xiao-song Ma, Stefan Zotter, Johannes Kofler, Rupert Ursin, Thomas Jennewein, Časlav Brukner, Anton Zeilinger

"If Victor projects his two photons onto an entangled state, Alice’s and Bob’s photons are entangled although they have never interacted or shared any common past. ... In [Peres'] gedanken experiment, Victor is free to choose either to project his two photons onto an entangled state and thus project Alice’s and Bob’s photons onto an entangled state, or to measure them individually and then project Alice’s and Bob’s photons onto a separable state. In order to experimentally realize Peres’ gedanken experiment, we place Victor’s choice and measurement in the time-like future of Alice’s and Bob’s measurements, providing a “delayed-choice” configuration in any and all reference frames."
Price and Wharton have written a paper about these experiments as well. In the appendix they provide a (statistical) collider analysis where they describe retrocausal influences acting across the 'W zig-zags' created by the past light cones of the experiment's components.
 
  • #10
MrRobotoToo said:
Price and Wharton have written a paper about these experiments as well. In the appendix they provide a (statistical) collider analysis where they describe retrocausal influences acting across the 'W zig-zags' created by the past light cones of the experiment's components.
Whoops! :smile:

They most certainly do nothing of the kind. There’s no connection with entanglement swapping ("W") of the type in the experiment referenced. They merely claim such an explanation without any substantive proof of any kind. Please note that their paper is not generally accepted. As much as I admire their work (especially on retrocausality), their paper and their explanation have no significance at all in this matter.

I am quite familiar with their hypothesis about the Collider Loophole (CL). This is not a well-accepted concept even in classical statistics, although it can be demonstrated in contrived examples. De Raedt et al use other such contrived examples as well in attempts to tear down nonlocal explanations of QM. However, only examples that are directly applicable to cited experiments have any value when it comes to Quantum Mechanics.

For example, from the Price/Wharton paper:

"The sensitivity of CL to the spacetime location of the central vertex depends on the assumption that there is no retrocausality in the systems in question." [Please refer to the title/subject of this thread to see how inappropriate this statement is for reference here.]

"This immediately puts on the table the possibility that any apparent AAD between A and B might be a manifestation of selection bias." [This statement has no particular support, it's merely their speculation. Actual theoretical predictions, supported cited experiment, consider and exclude this possibility.]

"These toy examples suggest that in any real experiment in which apparent AAD
is an artifact of collider bias...
" [A toy example is not an actual example, is it?]

An important consideration, usually ignored in attacks on nonlocality in QM, is that the following are BOTH requirements of any attempts to describe a local explanation of entanglement swapping:

1) Explain Bell inequality violations. The hypothesis of a "collider loophole" attempts to address this.
2) Explain perfect correlations. The hypothesis of a "collider loophole" does NOT address this - nor does the mention of "post-selection" change anything. (Note that in ALL scientific experiments, there is post-selection. It's part of the scientific method.) The cited experiment of Ma et al involves perfect correlations (ideal case of course).

So... my citation stands as is. If you can demonstrate where one of the authors of the paper says something like "yes, Price/Wharton's paper invalidates our results" then please pass that on. :smile:
 
Last edited:
  • Wow
  • Like
Likes PeroK and MrRobotoToo
  • #11
But isn't it straightforward to construct a retrocausal hidden variable theory that reproduces the quantum predictions of delayed-choice entanglement swapping? In this framework, the hidden variables depend not only on past events but also on future measurement choices—namely, those of Victor, the third experimenter.

Let ##\alpha_j## and ##\beta_j## be the measurement settings of Alice and Bob, respectively, and let ##\theta_j## be the angle between these settings. Let ##A_j## be a random sequence of values ##\pm 1##, representing potential outcomes for Alice’s measurements.

Now introduce Victor, who chooses at the last moment between performing a Bell state measurement (BSM) (projecting onto an entangled basis) or a separable state measurement on the central particle pair (particles 2 and 3). Denote this choice by ##V_j##, which may take values "entangled" or "separable".

Here’s the retrocausal twist: We posit that the hidden variable ##\lambda_j## for each entangled pair (1–2 and 3–4) depends on Victor’s future choice ##V_j##. Specifically, define ##B_j## (the outcome for Bob) as a probabilistic function of ##A_j##, ##\theta_j##, and ##V_j##, such that:
  • If ##V_j = \text{“entangled”}##, then ##B_j = A_j## with probability ##\sin^2(\theta_j/2)##, and ##B_j = -A_j## with the complementary probability.
  • If ##V_j = \text{“separable”}##, then ##B_j## is independent of ##A_j##, reflecting the lack of entanglement correlation.
The key idea is that the hidden variables “know” about Victor’s future measurement choice and adapt accordingly. This retrocausality allows the model to match quantum mechanical predictions, even when Victor’s choice is made after Alice and Bob have already recorded their results.

Thus, the hidden variable ##\lambda_j## comprises ##A_j##, ##V_j##, and a probabilistic rule for generating ##B_j## conditional on both ##\theta_j## and ##V_j##.

Then, if Alice and Bob’s detector settings match ##\alpha_j## and ##\beta_j##, and if the hidden variables are retrocausally influenced by ##V_j##, the measurement statistics will agree with those predicted by QM.

The hard part, of course, is justifying how nature “knows” the future measurement choice. But within a retrocausal framework, this is permitted: causal influence is allowed to propagate backward from the future measurement event to the past preparation stage.
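To check that this toy rule really does what's claimed, here is a minimal Monte-Carlo sketch (my own illustration of the rule above, not taken from any paper; the function and variable names are made up):

[CODE=python]
import numpy as np

rng = np.random.default_rng(0)

def run_trials(theta, victor_choice, n=200_000):
    """Toy retrocausal rule: Bob's outcome B is generated from Alice's outcome A,
    the relative angle theta, and Victor's (future) choice victor_choice."""
    A = rng.choice([-1, 1], size=n)
    if victor_choice == "entangled":
        # B = A with probability sin^2(theta/2), otherwise B = -A
        same = rng.random(n) < np.sin(theta / 2) ** 2
        B = np.where(same, A, -A)
    else:  # "separable": B is generated independently of A
        B = rng.choice([-1, 1], size=n)
    return np.mean(A * B)

for theta in (0.0, np.pi / 4, np.pi / 2, np.pi):
    print(f"theta={theta:4.2f}  E={run_trials(theta, 'entangled'):+.3f}  "
          f"singlet prediction={-np.cos(theta):+.3f}")
print("separable case, theta=pi/4:", round(run_trials(np.pi / 4, 'separable'), 3))
[/CODE]

The "entangled" branch reproduces ##E(A_j, B_j) = -\cos\theta_j##, i.e. the singlet correlation, while the "separable" branch gives (to within sampling noise) zero correlation, as expected.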
 
Last edited:
  • Like
Likes DrChinese
  • #12
DrChinese said:
Whoops! :smile:

You are overlooking numerous experiments in which photons are entangled which have never existed in a common light cone. They are produced by independent sources. This is accomplished via remote entanglement swapping. There is no intersection of common light cones. The decision to entangle such photons- or not - can be freely made by the experimenter, who can also do that remotely.
I don't think that this is a good description of what is going on in those experiments. Entanglement, at a minimum requires an ultimate shared light-cone.
 
  • #13
ohwilleke said:
I don't think that this is a good description of what is going on in those experiments. Entanglement, at a minimum requires an ultimate shared light-cone.
Well, all experiments originated on the planet Earth (or the universe for that matter) technically exist in a common light cone. If that is your standard for determining a "local" interaction, then everything is local. But that is not the normal scientific definition of a "common" light cone. Numerous experiments using the Entanglement Swapping protocol feature photons from independent sources which never exist in a common light cone.

High-fidelity entanglement swapping with fully independent sources (2008)

"Entanglement swapping allows to establish entanglement between independent particles that never interacted nor share any common past. ... We have demonstrated high-fidelity entanglement swapping with time-synchronized independent sources."

This paper was also written by a team led by Nobel laureate Anton Zeilinger. Of course, maybe they don't know what they are talking about. :smile:
 
  • Skeptical
Likes ohwilleke
  • #14
MrRobotoToo said:
But isn't it straightforward to construct a retrocausal hidden variable theory that reproduces the quantum predictions of delayed-choice entanglement swapping? In this framework, the hidden variables depend not only on past events but also on future measurement choices—namely, those of Victor, the third experimenter.

...


Here’s the retrocausal twist: We posit that the hidden variable ##\lambda_j## for each entangled pair (1–2 and 3–4) depends on Victor’s future choice ##V_j##. Specifically, define ##B_j## (the outcome for Bob) as a probabilistic function of ##A_j##, ##\theta_j##, and ##V_j##, such that:
  • If ##V_j = \text{“entangled”}##, then ##B_j = A_j## with probability ##\sin^2(\theta_j/2)##, and ##B_j = -A_j## with the complementary probability.
  • If ##V_j = \text{“separable”}##, then ##B_j## is independent of ##A_j##, reflecting the lack of entanglement correlation.
The key idea is that the hidden variables “know” about Victor’s future measurement choice and adapt accordingly. This retrocausality allows the model to match quantum mechanical predictions, even when Victor’s choice is made after Alice and Bob have already recorded their results.

...

The hard part, of course, is justifying how nature “knows” the future measurement choice. But within a retrocausal framework, this is permitted: causal influence is allowed to propagate backward from the future measurement event to the past preparation stage.

Great question! I'm not sure I can answer it.

But generally I agree that some kind of mechanism might exist if you bring retrocausality into the equation. I would say that personally I don't see how we could introduce hidden variables that would uniquely determine quantum outcomes. So in my view, there is still the usual quantum indeterminacy. But I believe it is difficult to disregard the basic fact that the future context must be known to provide the best predictions for experiments using remote entanglement swapping. (You could even define things to call that retrocausal right there. Or in the view of @RUTA this is "acausal" rather than "retrocausal".)

There is of course the question: "What is straight-forward?" It turns out that any proposed retrocausal mechanism must be substantially more complex than it might initially appear. Let's suppose we have the 4 photons (1/2/3/4) where we entangle 1 & 4 remotely (no direct interaction) and they come from initially entangled pairs 1&2 and 3&4 from independent sources. Alice measures 1, Bob measures 4, and Victor measures 2 & 3 in such a way as to cause a swap (or not, depending on his choice). This follows the Zeilinger references: Experimental delayed-choice entanglement swapping (2012)

1. Now, the 1 & 4 photons have no relationship whatsoever - even retrocausally - unless Victor makes the choice to execute a swap on them. And in fact, there could exist another entangled pair from some other source - photons 5&6 let's say - that could potentially be candidates for swapping with 1&2. Victor could execute a swap on 2 & 5. In that case, you'd be entangling 1 & 6 instead of 1 & 4, right? And of course Victor can execute the swap at any time - before measurements by Alice and Bob, or well after, and/or local or distant. So Victor can choose to do any of those without any observable change to the recorded results.

2. When a swap occurs, Victor entangles 1 & 4 in one of 4 Bell states. Which one is selected occurs randomly as far as anyone knows, and the specific Bell state that occurs is NOT under Victor's control. But there are definitely at least 4 variations when the conditions are ripe for a swap. (Note that I am skipping over a lot of detail about Bell states that is described more fully in the reference.) When a swap does occur, keep in mind the physical requirements per the reference:

a. Photons 2 & 3 must not be distinguishable.
b. Photons 2 & 3 must overlap in a beam splitter (BS), but NOT a polarizing beam splitter (PBS).
c. The outputs of the BS occur randomly (transmit or reflect).
d. Photons 2 & 3 must be measured as to polarization by a PBS (separate from the BS) and those occur randomly.
e. And very importantly: The measurement of polarization by Victor is usually on an unbiased basis relative to the measurements of Alice and Bob.

The importance of this is as follows: for our retrocausal hidden variables to make sense, there must exist a heretofore unknown connection between non-commuting (complementary) polarization observables of quantum particles. For photons: there must be some hidden variable connection between the H/V polarizations and the +/- polarizations. How else to explain what occurs, since that is all that we see in experiments? Alice and Victor measure observables that don't commute!

If they don't commute, but there is a hypothetical connection we seek to employ to explain entanglement swapping... well now we have really stepped into it. We are rejecting a fundamental element of the quantum world. So... I'm concerned that the retrocausal hidden variables solution has issues that are not as satisfying as I would hope. :smile:
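For reference, the textbook identity behind point 2 (standard Bell-state algebra with the usual phase conventions, not specific to any one paper): writing each source pair as a singlet and regrouping photons 1 & 4 and photons 2 & 3 in the Bell basis gives
$$|\Psi^-\rangle_{12}\,|\Psi^-\rangle_{34} = \tfrac{1}{2}\Big(|\Psi^+\rangle_{14}|\Psi^+\rangle_{23} - |\Psi^-\rangle_{14}|\Psi^-\rangle_{23} - |\Phi^+\rangle_{14}|\Phi^+\rangle_{23} + |\Phi^-\rangle_{14}|\Phi^-\rangle_{23}\Big),$$
so when Victor's BSM succeeds, each of the four possible Bell states for photons 1 & 4 occurs with probability 1/4, and which one occurs is indeed not under his control.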
 
Last edited:
  • Like
Likes MrRobotoToo
  • #15
MrRobotoToo said:
Copenhagen saves locality by sacrificing reality;
I don't think so, because there is no reality in quantum mechanics.
 
  • Like
  • Skeptical
Likes ohwilleke and MrRobotoToo
  • #16
martinbn said:
I don't think so, because there is no reality in quantum mechanics.
But isn’t that interpretation-dependent? Although it may be the case that the experimental data are underdetermined with respect to the various realist interpretations of QM, that doesn't mean that there is no reality behind them--just that our access to it may forever be withheld. In the absence of any experimental means of adjudicating between the interpretations, an instrumentalist approach to the subject will have to suffice in pedagogical and research settings, but it’s a huge leap from there to declare that QM proves that there’s no reality.
 
Last edited:
  • #17
DrChinese said:
Well, all experiments originated on the planet Earth (or the universe for that matter) technically exist in a common light cone. If that is your standard for determining a "local" interaction, then everything is local. But that is not the normal scientific definition of a "common" light cone. Numerous experiments using the Entanglement Swapping protocol feature photons from independent sources which never exist in a common light cone.

High-fidelity entanglement swapping with fully independent sources (2008)

"Entanglement swapping allows to establish entanglement between independent particles that never interacted nor share any common past. ... We have demonstrated high-fidelity entanglement swapping with time-synchronized independent sources."

This paper was also written by a team led by Nobel laureate Anton Zeilinger. Of course, maybe they don't know what they are talking about. :smile:
I'm not disagreeing that entanglement swapping works.

I'm disagreeing with the concept that entanglement swapping doesn't involve a common light-cone (which goes both forward and backward from a point in space-time), albeit, more elaborately than most.

Wikipedia explains what entanglement swapping is (image from the same source) as follows:


Entanglement swapping has two pairs of entangled particles: (A, B) and (C, D). Pair of particles (A, B) is initially entangled, as is the pair (C, D). The pair (B, C) taken from the original pairs, is projected onto one of the four possible Bell states, a process called a Bell state measurement. The unmeasured pair of particles (A, D) can become entangled. This effect happens without any previous direct interaction between particles A and D.

A and B are in the same light cone when their entanglement is created. C and D are in the same light cone when their entanglement is created. B and C are in the same light cone when they conduct the entanglement swap.

An entanglement swap basically requires three contact interactions, each corresponding to a light cone, instead of just one in a normal entanglement scenario. A, B, C, and D all have to be within the backward-in-time (past) BC light cone at the points at which entanglement occurs.


Retrocausality is certainly the more intuitive way to understand it, IMHO, with A and D linked through a chain that goes both forward and backward in time from one to the other.

But, it isn't at all obvious that you couldn't achieve the same result through non-locality, with the secondary entanglement of BC "communicated" non-locally to A and D, at the time of the secondary entanglement (assuming for the sake of argument that simultaneity can be defined in some meaningful way between non-local particles, which is itself a serious problem), without true retrocausality, and a non-local interpretation is less obviously susceptible to paradoxes.

Part of why a non-local interpretation makes less intuitive/heuristic sense is that in a non-local interpretation, why should it matter that the particles are connected by a chain of contact interactions?

But, the reason for this is manifestly obvious in a retrocausal interpretation of quantum entanglement.

On the other hand, while retrocausality is one way to explain quantum entanglement, it cannot explain some other quantum phenomena that are impossible if the classical axioms of causality, locality, and reality all hold true. So, if you use retrocausality to explain entanglement, you have to sacrifice at least one more of these axioms to explain all of the phenomena in quantum physics that are impossible in classical physics.

For example, retrocausality can't be used to explain quantum tunneling (i.e. violations of mass-energy conservation in intermediate steps of an interaction that are not observed and that are restored in the final state of an interaction which is measured). You need to sacrifice locality and/or reality for that.

But, if you rely on violations of the locality and/or reality axioms to explain why quantum entanglement is possible, you can have an interpretation of quantum mechanics that is fully causal. A fully causal interpretation of quantum mechanics is particularly attractive to have in an interpretation of quantum mechanics if you want to extend quantum mechanics to include certain quantum gravity theories.
 
Last edited:
  • Like
Likes MrRobotoToo
  • #18
ohwilleke said:
I'm not disagreeing that entanglement swapping works. I'm disagreeing with the concept that entanglement swapping doesn't involve a common light-cone (which goes both forward and backward from a point in space-time), albeit, more elaborately than most.

Wikipedia explains what entanglement swapping is (image from the same source) as follows (showing ABCD):


1. A and B are in the same [forward] light cone when their entanglement is created. C and D are in the same [forward] light cone when their entanglement is created. B and C are in the same [backward] light cone when they conduct the entanglement swap.

2. An entanglement swap basically requires three contact interactions, each corresponding to a light-cone, instead of just one in a normal entanglement scenario. A, B, C, and D all have to be within the backward in time looking BC light-cone at the points at which entanglement occurs.

3. Retrocausality is certainly the more intuitive way to understand it, IMHO, with A and D linked through a chain that goes both forward and backward in time from one to the other.

4. But, it isn't at all obvious that you couldn't achieve the same result through non-locality, with the secondary entanglement of BC "communicated" non-locally to A and D, at the time of the secondary entanglement (assuming for sake of argument that simultaneity can be defined in some meaningful way between non-local particles), without true retrocausality, and a non-local interpretation is less obviously susceptible to paradoxes.

5. Part of why a non-local interpretation makes less intuitive/heuristic sense is that in a non-local interpretation, why should it matter that the particles are connected by a chain of contact interactions? But the reason for this is manifestly obvious in a retrocausal interpretation.
1. Note my added comments in brackets.

2. The A & D measurements can occur before the BC swap in all reference frames (or can occur after in all reference frames). As is already well known with Bell tests, the A & D measurement settings can be selected midflight such that no lightspeed communication can occur between your contact points. So while it is generally true that the ABCD creation occurs in the backward light cone of the BC swap, that is a useless point here. It is the idea that A and D are perfectly correlated at any same angle settings selected for measurement that is our issue to solve, and those can occur in additional spacetime points. (Also, the A photon can cease to exist before the D photon is even created.)

3. Agreed as to this idea as having some intuitive benefits.

4. Agreed that nonlocality (at least in the sense of simply communicating forward in time faster than c from one point to another) doesn't seem to apply to these experiments.

5. Agreed to this basic idea as well.

So what is communicating with what, even in the retrocausal variation? I struggle with this. *

I like the basic idea that you can follow the zigzag path in both time directions. (Price and Wharton call it a "W" type, as opposed to "V" type.) And in fact all variations on Entanglement Swapping feature some variation on this connection. It does explain why there are some limits to the space time locations involved. But the confusing part (to me) is that there is certainly no preferred direction - nothing you could call a "transmitter" and nothing to call a "receiver". (Except of course if we label those arbitrarily.)

But... I don't see the overarching single light cone you do. a) Photons A and D can exist in spacetime regions that are completely disjoint during their respective existences. b) The swap of photons B and C is dependent on them physically overlapping indistinguishably in a remote beam splitter, followed by polarization measurements. Those operations can be performed anywhere and at any time, and in fact there is no relationship between B and C prior to the swap (so the experimenter could conceivably select to execute a swap with any entangled photon that is made to arrive within the coincidence time window).

(Of course I ignore the fact that all measurement outcomes must be communicated with classical messaging, since we don't otherwise have an FTL mechanism to send signals.)


*In my imagination, I envision something a la a mixture of the path integral concept; perhaps combined with some version of a backwards-in-time offer wave from the future**; with one of the many paths randomly becoming the "selected" one we observe. Of course this would apply to the entire context (initial and final) in Entanglement Swapping, and would appear to have nonlocal attributes despite nothing exceeding c at any point.

**Example being the Two State Vector Formalism. Which would be a form of retrocausality.
 
  • #19
DrChinese said:
You are overlooking numerous experiments in which photons are entangled which have never existed in a common light cone. They are produced by independent sources. This is accomplished via remote entanglement swapping. There is no intersection of common light cones. The decision to entangle such photons- or not - can be freely made by the experimenter, who can also do that remotely.
To invoke retrocausality in the manner it is invoked by Price + Wharton in the OP, all that is necessary is that the preparation events of the photons both lie in the past light cone of the Bell-state measurement. So if we entertain retrocausality, it means we can always interpret the Bell-state measurement outcome as a collider for the influences of Alice's and Bob's measurements. This is the case even for exotic "never co-existing" variants of entanglement swapping.
 
  • Like
Likes ohwilleke
  • #20
Morbert said:
1. To invoke retrocausality in the manner it is invoked by Price + Wharton in the OP, all that is necessary is that the preparation events of the photons both lie in the past light cone of the Bell-state measurement. So if we entertain retrocausality, it means we can always interpret the Bell-state measurement outcome as a collider for the influences of Alice's and Bob's measurements.

Morbert said:
2. This is the case even for exotic "never co-existing" variants of entanglement swapping.
1. Just pointing out, from the Price/Wharton paper:

"The sensitivity of CL to the spacetime location of the central vertex [Bell State Measurement] depends on the assumption that there is no retrocausality in the systems in question."

I guess that could be restated as: “If the system is not sensitive to the location of the Bell State Measurement, then there is retrocausality.” Not sure I have the contrapositive quite right, or if the result makes sense if I do. Obviously the BSM can be performed remotely, and either before or after the measurements of Alice and Bob.

2. Agreed.


I will note again: the Collider statistical hypothesis cannot be applied to predictions of perfect correlation. And even for statistical correlations (a la Bell), Price and Wharton have provided no direct examples of how it applies here. They simply claim there is a parallel.

On the other hand: the Ma et al paper demonstrates that, in line with quantum theory, the choices/actions of the experimenter to execute an entangled measurement or a separable one controls the results. A direct causal relationship is identified based on controlled physical differences.
 
  • #21
DrChinese said:
1. Just pointing out, from the Price/Wharton paper:

"The sensitivity of CL to the spacetime location of the central vertex [Bell State Measurement] depends on the assumption that there is no retrocausality in the systems in question."

I guess that could be restated as: “If the system is not sensitive to the location of the Bell State Measurement, then there is retrocausality.” Not sure I have the contrapositive quite right, or if the result makes sense if I do. Obviously the BSM can be performed remotely, and either before or after the measurements of Alice and Bob.
It would be: If we assume retrocausality is possible, then there is no sensitivity of the CL to the spacetime location of the central vertex.
DrChinese said:
I will note again: the Collider statistical hypothesis cannot be applied to predictions of perfect correlation. And even for statistical correlations (a la Bell), Price and Wharton have provided no direct examples of how it applies here. They simply claim there is a parallel.

On the other hand: the Ma et al paper demonstrates that, in line with quantum theory, the choices/actions of the experimenter to execute an entangled measurement or a separable one controls the results. A direct causal relationship is identified based on controlled physical differences.
In scenarios where the CL applies, we would say the perfect correlation events registered by Alice and Bob influence Charles's apparatus such that, if he chooses to perform a BSM, he will only get the appropriate Bell state/states (depending on his particular BSM apparatus).
 
  • #22
Last edited by a moderator:
  • Like
Likes DrChinese
  • #23
Morbert said:
In scenarios where the CL applies, we would say the perfect correlation events registered by Alice and Bob influence Charles's apparatus such that, if he chooses to perform a BSM, he will only get the appropriate Bell state/states (depending on his particular BSM apparatus).
My point is that the CL hypothesis is completely invented and has no applicability whatsoever to this quantum scenario. There isn't even a classical scenario where such is possible for perfect correlations.

Usually, the classical idea is that on alternate Wednesdays, sick patients are diagnosed with cancer in France but not in Italy. Or whatever, you fill in the details.

But none of that works for Entanglement Swapping experiments. If the experimenter chooses to perform an entangled measurement, there are the usual entangled state statistics. If the experimenter chooses separable state measurements, there are separable state statistics (no correlation at all). Cause and effect, simple. No colliders influencing anything. Just a blank assertion that it's a factor.
 
Last edited:
  • #24
DrChinese said:
If the experimenter chooses to perform an entangled measurement, there are the usual entangled state statistics. If the experimenter chooses separable state measurements, there are separable state statistics (no correlation at all).
We can construct similar statements for classical systems. In the past I have used Bertlmann's socks scenarios. Price + Wharton use a rock-paper-scissors scenario:

"Here’s a simpler example, to lead us in the direction of quantum cases. Imagine that Alice and Bob are at spacelike separation, and play rock-paper-scissors with each other, sending their choices to a third observer, Charlie. Suppose that Alice and Bob make their choices entirely at random, and that Charlie records three kinds of outcomes: Alice wins, Bob wins, or neither wins. Obviously, post-selecting on any one of these outcomes induces a correlation between Alice’s choices and Bob’s choices. Equally obviously, this does not amount to real causality between Alice and Bob." -- Price + Wharton

Using your phrasing: "If Charlie records victory outcomes, there are correlated state statistics. If Charlie instead records choice of rock paper or scissors, there will be no correlation at all."

This is not enough to infer a causal relation. Quantum systems will permit deeper correlations, forbidden by classical systems, but this is already the case in conventional entanglement experiments. That is the work the CL does: it sets certain configurations of entanglement swapping experiments on an equal footing with conventional entanglement experiments regarding their interpretations. Specifically, experiments where Charles's measurement is in the future light cone of Alice's and Bob's, or interpretations where retrocausal relations are entertained.

I.e. The CL is not asserted as an interpretation-independent fact. It is instead invoked under appropriate interpretations in understanding entanglement swapping experiments and how they relate to conventional entanglement experiments.
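As a concrete, purely classical illustration of the post-selection/collider point in the rock-paper-scissors example, here's a minimal sketch (mine; the variable names are invented):

[CODE=python]
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Alice and Bob independently and randomly pick rock (0), paper (1), or scissors (2)
alice = rng.integers(0, 3, n)
bob = rng.integers(0, 3, n)

# Charlie's record of who wins is a "collider": it depends on both choices
# (choice k beats choice k-1 mod 3, e.g. paper beats rock)
alice_wins = (alice - bob) % 3 == 1

def corr(x, y):
    return np.corrcoef(x, y)[0, 1]

print("all games:              ", round(corr(alice, bob), 3))                          # ~0.0
print("post-selected (A wins): ", round(corr(alice[alice_wins], bob[alice_wins]), 3))  # ~-0.5
[/CODE]

Conditioning on Charlie's outcome induces a correlation between Alice's and Bob's otherwise independent choices without any causal influence between them; that is all the "collider" terminology refers to. Whether this kind of selection effect can carry any of the weight in the quantum swapping experiments is, of course, exactly what is in dispute in this thread.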
 
  • Like
Likes martinbn
  • #25
Morbert said:
We can construct similar statements for classical systems. In the past I have used Bertlmann's socks scenarios. Price + Wharton use a rock-paper-scissors scenario:

"Here’s a simpler example, to lead us in the direction of quantum cases. Imagine that Alice and Bob are at spacelike separation, and play rock-paper-scissors with each other, sending their choices to a third observer, Charlie. Suppose that Alice and Bob make their choices entirely at random, and that Charlie records three kinds of outcomes: Alice wins, Bob wins, or neither wins. Obviously, post-selecting on any one of these outcomes induces a correlation between Alice’s choices and Bob’s choices. Equally obviously, this does not amount to real causality between Alice and Bob." -- Price + Wharton

Using your phrasing: "If Charlie records victory outcomes, there are correlated state statistics. If Charlie instead records choice of rock paper or scissors, there will be no correlation at all."
And I said: “Usually, the classical idea is that on alternate Wednesdays, sick patients are diagnosed with cancer in France but not in Italy. Or whatever, you fill in the details...” [as you have].

Note there is absolutely no correspondence in your example to Entanglement Swapping ES.

First, there needs to be something corresponding to a Bell State Measurement BSM … which is performed on a completely different unbiased basis relative to the outcomes seen by Alice and Bob. Second, Charlie can perform his measurements either before or after Alice and Bob, something that cannot happen in the cited example. Third, Charlie/Victor reports his results, and the tabulation of the full set of results including Alice and Bob is presented. Lastly, results show that when the BSM is performed, there is correlation; else there is none, although the signature is the same. This demonstrates ES cause and effect.

Price/Wharton do not subject their examples to sufficient critical examination. Keep in mind I am a longtime fan and follower of their work, but am completely confused (baffled actually) by their assertion about the collider loophole CL and ES. There are no classical narratives that are parallel to ES.
 
Last edited:
  • Like
Likes PeterDonis
  • #26
Morbert said:
We can construct similar statements for classical systems.
Not if you want to explain Bell inequality violations. None of the classical examples can do that. Indeed, since you mentioned Bertlmann's socks, Bell used that explicitly as an example of a classical case that couldn't explain violations of his inequalities (which are violated in quantum experiments).
 
  • Like
Likes DrChinese
  • #27
PeterDonis said:
Indeed, since you mentioned Bertlmann's socks, Bell used that explicitly as an example of a classical case that couldn't explain violations of his inequalities (which are violated in quantum experiments).
Bertlmann's Socks And The Nature of Reality

Almost every attempt to provide some kind of narrative explanation of Entanglement and/or Entanglement Swapping falls victim to a similar mistake: it describes a) the perfect correlation scenario (as do Bertlmann socks type narratives); or b) it describes correlations as coincidences/spurious.

Importantly: they can't do both a) and b) with a single narrative. The retrocausal approach, to the extent it does either, does so for both. Or at least it has that potential, if fully fleshed out.
 
  • Like
Likes PeterDonis
  • #28
DrChinese said:
First, there needs to be something corresponding to a Bell State Measurement BSM … which is performed on a completely different unbiased basis relative to the outcomes seen by Alice and Bob.
PeterDonis said:
Not if you want to explain Bell inequality violations. None of the classical examples can do that. Indeed, since you mentioned Bertlmann's socks, Bell used that explicitly as an example of a classical case that couldn't explain violations of his inequalities (which are violated in quantum experiments).
Morbert said:
Quantum systems will permit deeper correlations, forbidden by classical systems, but this is already the case in conventional entanglement experiments.
Classical examples are not used to explain Bell inequality violations. They are instead used to draw analogies in causal relations: In scenarios where the collider loophole applies, whether the systems are classical or quantum, correlations in subsets of Alice's and Bob's outcomes selected by Charles's outcomes can be interpreted as post-selection effects. This does not explain why quantum systems permit deeper correlations than classical systems. But this distinction between permitted correlations in quantum and classical systems is already present in standard EPRB entanglement experiments.

I.e. The collider loophole places the appropriate entanglement swapping experiments on the same causal footing as conventional entanglement experiments, and hence if an interpretation accounts for conventional entanglement without invoking "action at a distance" (Price + Wharton), it will similarly account for certain entanglement swapping experiments.
DrChinese said:
Second, Charlie can perform his measurements either before or after Alice and Bob, something that cannot happen in the cited example.
The CL is insensitive to the spacetime location of the central vertex only if retrocausality is supposed. None of this contradicts Price + Wharton's careful use of the CL.
DrChinese said:
Third, Charlie/Victor reports his results, and the tabulation of the full set of results including Alice and Bob is presented. Lastly, results show that when the BSM is performed, there is correlation; else there is none, although the signature is the same. This demonstrates ES cause and effect.
I've hashed out the problems with your understanding of the relation between BSM and SSM signatures in many previous threads, and doing so once again would only bring this thread off course.
 
  • Like
Likes martinbn
  • #29
Morbert said:
Classical examples are not used to explain Bell inequality violations. They are instead used to draw analogies in causal relations
And such analogies are useless if they can't explain the actual experimental results. Which they can't if they use classical systems that can't violate the Bell inequalities.
 
  • Like
Likes DrChinese
  • #30
Morbert said:
the collider loophole
I think we're just going to have to disagree about whether this is a valid concept or not.
 
  • #31
Morbert said:
1. In scenarios where the collider loophole applies, whether the systems are classical or quantum, correlations in subsets of Alice's and Bob's outcomes selected by Charles's outcomes can be interpreted as post-selection effects.

2. I.e. The collider loophole places the appropriate entanglement swapping experiments on the same causal footing as conventional entanglement experiments, and hence if an interpretation accounts for conventional entanglement without invoking "action at a distance" (Price + Wharton), it will similarly account for certain entanglement swapping experiments.

3. The CL is insensitive to the spacetime location of the central vertex only if retrocausality is supposed. None of this contradicts Price + Wharton's careful use of the CL.

4. I've hashed out the problems with your understanding of the relation between BSM and SSM signatures in many previous threads, and doing so once again would only bring this thread off course.
1. There is no science to support this statement. Correlating outcomes from Alice, Bob and Charlie is no more "postselection" than any experiment of any type anywhere. In entanglement experiments: the design of the experiment is performed beforehand, and follows established theory. The scientists are testing a specific hypothesis: namely, that the experimenter's choice of a BSM leads to entangled statistics, while a choice of an SSM leads to product statistics.

2. There is no science to support this statement either. It is completely ad hoc as a way to avoid addressing the issue. Entanglement swapping involves scientific issues that are quite different from those for traditional PDC-entangled systems. For example: a) swapping involves entanglement from different distant sources; b) indistinguishability plays a vital role during the BSM; c) the BSM can occur either before or after the measurements of Alice and Bob; and d) the BSM is performed on an unbiased basis relative to those of Alice and Bob.

3. None of this makes any sense (to me). If the CL existed, no assumptions are needed since the purpose of claiming the CL is to provide the groundwork for a local causal (Einsteinian) explanation - which is not retrocausal. To match experiment, the CL must be insensitive to location/causal ordering. On the other hand: if you accept retrocausality, then what purpose is the CL - since there is no longer a loophole at all.

4. The signature buckets are exactly the same in BSM/SSM experiments such as Megidish et al (see Fig. 3) but yes, slightly different (although of no significance) in Ma et al as you have argued. I agree there is little point of diverting this discussion, especially since you refuse to acknowledge important and well-accepted experimental science.

-DrC
 
Last edited:
  • #32
From Wharton and Argaman, discussion of retrocausal models. This is more what I am familiar with from Ken, and I think this is a good paper.

Bell's Theorem and Locally-Mediated Reformulations of Quantum Mechanics

"These locally-mediated models require the relaxation of an arrow-of-time assumption which is typically taken for granted. Specifically, some of the mediating parameters in these models must functionally depend on measurement settings in their future, i.e., on input parameters associated with later times. This option (often called "retrocausal") has been repeatedly pointed out in the literature... A brief survey of such models is included here. These models provide a continuous and consistent description of events associated with spacetime locations, with aspects that are solved "all-at-once" rather than unfolding from the past to the future. The tension between quantum mechanics and relativity which is usually associated with Bell's Theorem does not occur here."

But the Collider Loophole concept is a bust. It is not generally accepted science, and virtually every reference to Entanglement "Collider Loophole" in Google is to the same handful of papers by Price, Wharton and/or Mjelva.
 
  • #33
Hi everyone!

A new paper by Wharton & Price was submitted to ArXiv ten days ago (https://arxiv.org/abs/2507.15128). There, they discuss entanglement as a kind of preselection due to the preparation of an initial state.

I'm starting to think that the collider loophole (CL) deserves a thread of its own :smile:

Lucas.
 
  • Like
Likes DrChinese and MrRobotoToo
  • #34
DrChinese said:
But the Collider Loophole concept is a bust.
The first time I read about the collider loophole, I had the same reaction. Now, while I'm not a "fan" of this idea, I have an opinion more similar to what @Morbert says here:
Morbert said:
The CL is not asserted as an interpretation-independent fact. It is instead invoked under appropriate interpretations in understanding entanglement swapping experiments and how they relate to conventional entanglement experiments.
In other words, it may be a useful concept within certain interpretations, such as those based largely on information, where a "mechanism" to explain entanglement is usually denied.

DrChinese said:
(...) virtually every reference to Entanglement "Collider Loophole" in Google is to the same handful of papers by Price, Wharton and/or Mjelva.
Well, maybe the rest of the world doesn't think that collider loophole is so useful after all...

Lucas.
 
  • #35
  • Like
Likes Sambuco, MrRobotoToo and DrChinese
  • #36
As an aside, I'd like to mention something I think we've already discussed, but which is relevant to the current discussion. The "entanglement" in entanglement swapping, and especially in the delayed-choice version (https://arxiv.org/abs/1203.4834) is not an interpretation-independent fact about the quantum state of Alice and Bob's particles. For well-known ##\Psi##-ontic interpretations, such as Everettian many-worlds or Bohmian mechanics, the statistical data are predicted by the forward-in-time evolution of the quantum state of the four-particle system according to the Schrödinger equation. In that case, the quantum state of Alice and Bob's particles is separable at any time from the beginning to the end of the experiment. In other words, according to these interpretations, in delayed-choice entanglement swapping (DCES), there is no entanglement swapped.
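A quick numerical check of that separability claim (my own sketch with numpy; the basis ordering and variable names are mine): with both source pairs prepared as singlets and before anyone measures, the reduced state of photons 1 & 4 is the maximally mixed state ##I/4##, which is manifestly separable.

[CODE=python]
import numpy as np

H = np.array([1.0, 0.0])  # horizontal polarization
V = np.array([0.0, 1.0])  # vertical polarization

# Singlet |Psi-> for one photon pair
psi_minus = (np.kron(H, V) - np.kron(V, H)) / np.sqrt(2)

# Four-photon state: pair (1,2) singlet x pair (3,4) singlet, index order 1,2,3,4
state = np.kron(psi_minus, psi_minus)
rho = np.outer(state, state.conj())

# Partial trace over photons 2 and 3 -> reduced state of photons 1 and 4
rho = rho.reshape([2] * 8)                       # ket indices 1,2,3,4 then bra indices 1,2,3,4
rho_14 = np.einsum('abcdAbcD->adAD', rho).reshape(4, 4)

print(np.allclose(rho_14, np.eye(4) / 4))        # True: maximally mixed, hence separable
[/CODE]

Of course this only checks the initial state, but no operation performed on photons 2 & 3 alone can create 1 & 4 entanglement, which is the sense of the claim above.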

One way out is simply to abandon the ##\Psi##-ontic stance and consider the wave function as mere information, as in ##\Psi##-epistemic interpretations, such as QBism, relational quantum mechanics or the information-based interpretation favoured by Zeilinger himself (https://link.springer.com/article/10.1023/A:1018820410908). However, if we insist on considering this correlation (entanglement) as something "ontic", and we look for some causal explanation with the wave function playing a role in it, then we must accept that the quantum state is not absolute, but relational. Thus, from the perspective of Victor (I follow the names employed by @DrChinese), his decision to make a BSM, together with the measurement outcomes obtained on the two particles he has access to, causes the state of Alice and Bob's particles to be updated, and entanglement arises retrocausally. In other words, there is no justification for accepting the quantum state that Victor assigns to the system, which should evolve backward-in-time, as ontic and rejecting the quantum state Alice and Bob put into the (textbook) Schrödinger equation. I'm not aware of any ##\Psi##-ontic interpretation that postulates anything like that. It would be something like a many-many-worlds interpretation.

Lucas.
 
  • #37
Sambuco said:
Hi everyone!

A new paper by Wharton & Price was submitted to ArXiv ten days ago (https://arxiv.org/abs/2507.15128). There, they discuss entanglement as a kind of preselection due to the preparation of an initial state.

I'm starting to think that the collider loophole (CL) deserves a thread of its own :smile:

Lucas.
I would be happy to participate, as the CL really has nothing to do with retrocausality.

Thanks for the reference to the new paper. I think it does a better job of making their case. And for me, it is easier to see and describe its glaring flaws.
 
  • Like
Likes Sambuco
  • #38
Sambuco said:
In other words, it may be a useful concept within certain interpretations, such as those based largely on information, where a "mechanism" to explain entanglement is usually denied.
Any interpretation features some kind of narrative that adds "something" to orthodox textbook QM. That narrative is subject to comparison to experiment.

Suppose someone said something like "Forward-in-time-only information-type interpretations fail because they fall victim to the Collider Loophole." Well, I don't have any particular issue with that statement. That's because I believe: "Forward-in-time-only information-type interpretations fail because they are contradicted by existing experiments." That's a much stronger assertion. The collider loophole doesn't even exist/apply in the experiments I cite; it really only exists as a putative concept in the Price/Wharton* papers.


*And to be clear: I am a big fan of most of their other work.
 
  • #39
Sambuco said:
1. As an aside... For well-known ##\Psi##-ontic interpretations, such as Everettian many-worlds or Bohmian mechanics, the statistical data are predicted by the forward-in-time evolution of the quantum state of the four-particle system according to the Schrödinger equation.

2. One way out is simply to abandon the ##\Psi##-ontic stance and consider the wave function as mere information, as in ##\Psi##-epistemic interpretations, such as QBism, relational quantum mechanics or the information-based interpretation favoured by Zeilinger himself (https://link.springer.com/article/10.1023/A:1018820410908).

Lucas.
1. I find those to be in direct conflict with DCES experiments such as the one cited. But unfortunately, I cannot locate sufficiently detailed statements by the well-known advocates of those interpretations in order to highlight such conflict in a suitable manner*. For example, Vaidman's MWI page in Plato skips this entirely. Ditto the page on Bohmian Mechanics. Generally, the papers that do address swapping employ substantial handwaving without providing any real meat. Referencing Schrödinger-equation evolution won't cut it, as there is a remote change of state (by definition) in all swapping experiments.

2. I am, of course, often surprised by the interpretations experimentalists hold, and specifically Zeilinger**. All of his work points away from information-type interpretations. I wonder if anyone has seen any recent statements about his preferred interpretation.


*Because the rules of that interpretation seem to change when challenged.
**Who is also of course a master theorist. :smile:
 
  • #40
Sambuco said:
One way out is simply to abandon the ##\Psi##-ontic stance and consider the wave function as mere information, as in ##\Psi##-epistemic interpretations, such as QBism, relational quantum mechanics or the information-based interpretation favoured by Zeilinger himself (https://link.springer.com/article/10.1023/A:1018820410908). However, if we insist on considering this correlation (entanglement) as something "ontic", and we look for some causal explanation in which the wave function plays a role, then we must accept that the quantum state is not absolute, but relational. Thus, from the perspective of Victor (I follow the names employed by @DrChinese), his decision to make a BSM, together with the measurement outcomes obtained on the two particles he has access to, causes the state of Alice and Bob's particles to be updated, and entanglement arises retrocausally. In other words, there is no justification for accepting the quantum state that Victor assigns to the system, which would have to evolve backward in time, as ontic, while rejecting the quantum state Alice and Bob put into the (textbook) Schrödinger equation. I'm not aware of any ##\Psi##-ontic interpretation that postulates anything like that. It would be something like a many-many-worlds interpretation.
Is this related to the fact that retrocausality violates the PBR theorem's assumption that it's possible to prepare systems independently of each other, thus nullifying its conclusion that any hidden variable model must incorporate an ontic ##\Psi##? Does the relationality of the wave function reduce it to being merely epistemic? (I'm just noticing that the toy HV model I presented earlier makes no explicit use of the wave function🤔)
 
  • #41
Sambuco said:
The "entanglement" in entanglement swapping, and especially in the delayed-choice version (https://arxiv.org/abs/1203.4834) is not an interpretation-independent fact about the quantum state of Alice and Bob's particles.
Careful. Write down any quantum state you like: whether or not that state is entangled is a mathematical fact about the state (whether or not it can be expressed as a product of states of subsystems--if it can't, it's entangled), and does not depend on any interpretation.

What does depend on the interpretation is what physical meaning is assigned to quantum states, and which quantum states are taken to be the "correct" ones to describe particular scenarios.
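For a concrete check of that point (a small worked example for two-qubit pure states): a state ##a|HH\rangle+b|HV\rangle+c|VH\rangle+d|VV\rangle## can be written as a product ##(\alpha|H\rangle+\beta|V\rangle)\otimes(\gamma|H\rangle+\delta|V\rangle)## if and only if ##ad-bc=0##, so that single determinant settles the question. The singlet ##(|HV\rangle-|VH\rangle)/\sqrt{2}## gives ##ad-bc=1/2\neq 0## and is therefore entangled, whatever interpretation one adopts; the interpretation only enters when deciding which state is the "correct" one to assign in a given scenario.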
 
  • Like
Likes DrChinese and Sambuco
  • #42
PeterDonis said:
Careful. Write down any quantum state you like: whether or not that state is entangled is a mathematical fact about the state (whether or not it can be expressed as a product of states of subsystems--if it can't, it's entangled), and does not depend on any interpretation.

What does depend on the interpretation is what physical meaning is assigned to quantum states, and which quantum states are taken to be the "correct" ones to describe particular scenarios.
Thanks @PeterDonis for your comment! I should have said "(...) is not an interpretation-independent fact about Alice and Bob's particles," without including the term "quantum state".
What do you think is best? Should I edit my post to avoid confusion for future readers, or should I leave it as is?

Lucas.
 
  • Like
Likes DrChinese
  • #43
DrChinese said:
the CL really has nothing to do with retrocausality
I have a similar view. I think the collider loophole could be a useful concept in information-based interpretations, but not in more "realistic" ones, where one attempts to find a mechanism to explain entanglement, and in that attempt, retrocausality appears as a possibility.

DrChinese said:
"Forward-in-time-only information-type interpretations fail because they are contradicted by existing experiments."
Well, information-based interpretations typically don't pay much attention to the time direction of state evolution, precisely because they assume that states represent mere information. I personally think they fit well with entanglement swapping, where temporal order doesn't change the results.

Lucas.
 
  • Like
Likes DrChinese
  • #44
DrChinese said:
1. I find those to be in direct conflict with DCES experiments such as the one cited. But unfortunately, I cannot locate sufficiently detailed statements by the well-known advocates of those interpretations in order to highlight such conflict in a suitable manner*. For example, Vaidman's MWI page in Plato skips this entirely. Ditto the page on Bohmian Mechanics. Generally, the papers that do address swapping employ substantial handwaving without providing any real meat. Referencing Schrödinger-equation evolution won't cut it, as there is a remote change of state (by definition) in all swapping experiments.
I believe these ##\Psi##-ontic interpretations assume that the quantum state evolves (forward in time) as predicted by the Schrödinger equation, just as in Mjelva's paper. For DCES, the remote change of the state occurs when Alice and Bob measure their particles, causing the global four-particle state (including the other two particles) to be projected onto the state corresponding to the measurement outcomes observed by Alice and Bob.
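To make that projection step explicit (a minimal worked example, assuming singlet pairs and that Alice and Bob both happen to measure in the ±45° basis and both obtain ##|+\rangle##):

$$\big(\langle +|_1\otimes\langle +|_4\big)\,|\psi^-\rangle_{12}|\psi^-\rangle_{34}\;\propto\;|-\rangle_2|-\rangle_3 ,$$

i.e. photons 2 and 3 are left in a product state carrying only the usual singlet anti-correlations with their original partners. On this forward-in-time account, Victor's later BSM outcome then merely sorts the already-recorded Alice/Bob results into the four subensembles.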

Unlike information-based interpretations, such as the one favoured by Zeilinger, these ##\Psi##-ontic interpretations conclude that the quantum state of the system, at any time during the delayed-choice version of the experiment, doesn't show entanglement between the particles to which Alice and Bob have access. This is what I found "problematic" about how these interpretations address entanglement swapping. They offer two completely different stories about what happens in the delayed and non-delayed cases, ignoring the fact that both experiments yield exactly the same results regardless of the temporal order.

Lucas.
 
  • #45
DrChinese said:
4. The signature buckets are exactly the same in BSM/SSM experiments such as Megidish et al (see Fig. 3) but yes, slightly different (although of no significance) in Ma et al as you have argued.
In both the Ma and Megidish experiments, the state of the biparticle system incident on Charles's apparatus is rotated during a successful BSM. In the Ma experiment, this rotation means the relevant polarization buckets in BSM runs cannot be identified with their corresponding buckets in SSM runs. Similarly in the Megidish experiment: The rotation of indistinguishable vs distinguishable photons breaks the correspondence of the polarization buckets in successful vs failed BSMs. They amount to different selection procedures.

No productive discussion about retrocausality or collider bias can be had until you understand this.
 
Last edited:
  • #46
Sambuco said:
Well, information-based interpretations typically don't pay much attention to the time direction of state evolution, precisely because they assume that states represent mere information. I personally think they fit well with entanglement swapping, where temporal order doesn't change the results.
As an aside, forward-in-time accounts based on ordinary undergraduate-level formalisms of QM are perfectly capable of reproducing all correlations seen in these experiments.
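For what it's worth, here is a quick numerical sketch of that claim (my own illustration, assuming singlet-state pairs and post-selection on Victor obtaining the ##|\psi^-\rangle## Bell outcome; not taken from any of the papers under discussion): plain forward-in-time Born-rule arithmetic on the four-photon state reproduces the singlet correlation ##E(a,b)=-\cos 2(a-b)## between photons 1 and 4 within that subensemble.

[CODE=python]
import numpy as np

# Polarization basis vectors: index 0 = H, index 1 = V
H = np.array([1.0, 0.0])
V = np.array([0.0, 1.0])

def singlet():
    # |psi-> = (|HV> - |VH>)/sqrt(2), stored as a (2, 2) tensor (all amplitudes real)
    return (np.outer(H, V) - np.outer(V, H)) / np.sqrt(2)

# Four-photon state |psi->_12 (x) |psi->_34, tensor indices ordered (1, 2, 3, 4)
psi = np.einsum('ab,cd->abcd', singlet(), singlet())

def pol(theta):
    # Linear-polarization analyzer state at angle theta (radians) from H
    return np.cos(theta) * H + np.sin(theta) * V

def joint_prob(theta1, theta4, bell_23):
    # Born-rule probability that photon 1 passes an analyzer at theta1,
    # photon 4 passes an analyzer at theta4, AND Victor's BSM on photons
    # 2 and 3 yields the Bell state bell_23 -- all from the forward-evolved state.
    amp = np.einsum('abcd,a,bc,d->', psi, pol(theta1), bell_23, pol(theta4))
    return abs(amp) ** 2

a, b = 0.0, np.pi / 8          # analyzer angles for photons 1 and 4
victor = singlet()             # post-select on Victor's |psi-> outcome

# The four analyzer-port combinations (1 = orthogonal port, i.e. "no pass")
probs = {}
for s1, s4 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    probs[(s1, s4)] = joint_prob(a + s1 * np.pi / 2, b + s4 * np.pi / 2, victor)

norm = sum(probs.values())     # probability of Victor's |psi-> outcome (here 1/4)
E = (probs[(0, 0)] + probs[(1, 1)] - probs[(0, 1)] - probs[(1, 0)]) / norm

print(E, -np.cos(2 * (a - b)))  # both approximately -0.707
[/CODE]

Substituting the other three Bell states for `victor` gives the correlations of the corresponding subensembles; nothing beyond the prepared state, the projection onto Victor's outcome, and the Born rule is used.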
 
  • Like
Likes Sambuco and gentzen
  • #47
Morbert said:
In both the Ma and Megidish experiment, the state of the biparticle system incident on Charles's apparatus is rotated during a successful BSM. This rotation means the relevant polarization buckets in BSM runs cannot be identified with their corresponding buckets in SSM runs. They amount to different selection procedures.

No productive discussion about retrocausality or collider bias can be had until you understand this.
As I have said repeatedly: The difference you describe (BSM vs. SSM) is true but not scientifically significant for the Ma experiment. I acknowledge you dispute this, despite this being one of the most commonly cited swapping experiments (and by a top team). But for the sake of argument, let's return to the other experiment and see...

What should be relevant and important from your perspective: That same technique (the difference between BSM and SSM) is NOT used in the Megidish experiment. Everything in the setup for both BSM and SSM is 100% identical as far as wave plates, polarizers, BS and PBS, detectors, etc. The setup is in Fig. 2.

Note that there are rotations of wave plates in the main experiment, and this is done in order to perform a full quantum state tomography (QST). So don't let that fool you into seeing a parallel with the Ma experiment. Unlike Ma, the main Megidish experiment does NOT compare BSM to SSM - and the mechanism to perform the QST operates exactly the same in all variations. So there is no difference in count rates for a full QST (approx. 4320 over 6 minutes).

Also potentially confusing: The Megidish experiment uses a very unusual technique to measure the photons' polarizations. They use a single set of polarizing beam splitters to measure all 4 photons. Normally there would be 3 sets (one each for Alice, Bob and Victor). In Megidish, photon 1 always arrives at the detectors first; photon 4 arrives last; and (when a BSM occurs) photons 2 and 3 physically overlap and arrive in the same time window.

The ONLY difference in the relevant variation: they made the 2 & 3 photons distinguishable, casting them into an SSM: "One can also choose to introduce distinguishability between the two projected photons." They do this by adding a small extra distance to one of the paths. From the paper: "we introduced a sufficient temporal delay between the two projected photons (see Fig. 3c)." So one photon (call it photon 2) arrives a bit later than photon 3. The results so obtained are unquestionably apples to apples, in contradiction to your assertion about different buckets. And they show that PHYSICAL overlap in the beam splitter is an absolute requirement for swapping. Added sufficient temporal delay => no BS overlap => no correlation*. The choice of a time delay (or not) is the (only) independent variable. Note also that in this experiment, there is no rapid random switching between BSM and SSM.

No physical overlap, no Entangled State statistics. And although this particular experiment does not feature remote delayed choice (DCES), it should go without saying that the results would not change if it had included that. When you combine the Ma and Megidish results: Alice and Bob can perform their measurements in the absolute relative past of Victor's choice to entangle (BSM) or not (SSM), leading to Victor's choice being identified as the causal agent for the results.

*Note that on the QST results (Fig. 3), there is correlation in all 3 graphs for the ##|HHHH\rangle## and ##|VVVV\rangle## bars. These are the cases for which all 4 photons are measured in the same H/V basis, which always produces correlation and is usually labeled as a separable state. To see proper swapping, the 2 & 3 photons must be measured on a different basis than 1 & 4.
 
Last edited:
  • #48
Morbert said:
As an aside, forward-in-time accounts based on ordinary undergraduate-level formalisms of QM are perfectly capable of reproducing all correlations seen in these experiments.
Don't think so. But if you have a reference that actually addresses Ma and/or Megidish in detail, then please share it with me. (Note that orthodox QM does not feature a forward-in-time explanation.)

As we discussed to death in another thread, Mjelva provided a detailed account of the kind you mention. It's definitely the best I have seen, and it nicely goes through every single step of the Ma and Megidish type experiments for forward-in-time-only interpretations. It also does the same for MWI.

However, it has a fatal flaw (explained in that thread): it is contradicted by experiment*. This is why retrocausal (or acausal, as @RUTA calls it) type interpretations have advantages. Only a complete quantum context, including future measurement settings, properly leads to the expected correlations.


*It took me until about post #86/89 or so in that thread to figure it out! So definitely was not obvious to me...
 
Last edited:
  • #49
DrChinese said:
I acknowledge you dispute this, despite this being one of the most commonly cited swapping experiments (and by a top team). So for the sake of argument, let's return to the other experiment.
I do not dispute any experimental facts. Everything I have said is fully consistent with Ma's paper. What I dispute is your incorrect understanding of Ma's paper. As in the Megidish experiment, the time evolution of the incident 2 and 3 photons breaks the correspondence between polarization buckets across runs with and without a BSM.
DrChinese said:
The ONLY difference in the relevant variation: they made the 2 & 3 photons distinguishable, casting them into an SSM
I should have labelled my edit in my previous post:
Morbert said:
Similarly in the Megidish experiment: The rotation of indistinguishable vs distinguishable photons breaks the correspondence of the polarization buckets in successful vs failed BSMs. They amount to different selection procedures.
I.e., what you are doing is looking at the polarization signature of two rotated indistinguishable particles and inferring, counterfactually, the same polarization signature if they had been distinguishable.
DrChinese said:
However, it has a fatal flaw (explained in that thread): it is contradicted by experiment.
It is not contradicted by experiment.
 
Last edited:
  • #50
MrRobotoToo said:
Is this related to the fact that retrocausality violates the PBR theorem's assumption that it's possible to prepare systems independently of each other, thus nullifying its conclusion that any hidden variable model must incorporate an ontic ##\Psi##?
I'm not well versed in all the nuances of the PBR theorem. I believe @DrChinese has studied it in more detail. Do you have anything to say about the relationship between preparation independence and retrocausality?

MrRobotoToo said:
Does the relationality of the wave function reduce it to being merely epistemic?
Good question! As far as I know, relational (or perspectival) interpretations tend to be ##\Psi##-epistemic. Personally, I think it would be possible to formulate a relational ##\Psi##-ontic interpretation in a consistent way.

Lucas.
 
  • Like
Likes MrRobotoToo