Insights: Superdeterminism and the Mermin Device

RUTA
Superdeterminism as a way to resolve the mystery of quantum entanglement is generally not taken seriously in the foundations community, as explained in this video by Sabine Hossenfelder (posted in Dec 2021). In her video, she argues that superdeterminism should be taken seriously; indeed, it is what quantum mechanics (QM) is screaming for us to understand about Nature. According to her video, in the context of the twin-slit experiment, superdeterminism simply means the particles must have known at the outset of their trip whether to go through the right slit, the left slit, or both slits, based on what measurement was going to be done on them. Thus, she defines superdeterminism this way:
Superdeterminism: What a quantum particle does depends on what measurement will take place.
In Superdeterminism: A Guide for the Perplexed she gives a somewhat more technical definition:

Theories that do not...

Continue reading...
 
In Superdeterminism: A Guide for the Perplexed one reads:

"The two biggest problem with superdeterminism at the moment are (a) the lack of a generally applicable fundamental theory and (b) the lack of experiment."
 
Nice article. Obviously, one of the problems when discussing Superdeterminism in a serious manner is that there really aren't any true candidates to knock down by way of experiment. I appreciate that Hossenfelder is trying to present something that could be knocked down, but in all honesty that won't reduce the seemingly infinite supply of alternate candidates.

So yes, you could simply say there are violations of Statistical Independence (SI). In your example, the 32/23 combinations produce different outcome sets for certain hidden variable combinations. In an actual experiment with the 32 or 23 settings as static, you'd notice that immediately (since the results must still match prediction) - so there must be more to the SI violation so that this result does not occur. And just like that, the Superdeterminism candidate must morph yet again.

But one thing has mystified me in these discussions. Always, the superdeterminism candidate is presented as local (since the ultimate purpose of the candidate is to restore determinism and locality). In your reference to the Chen article, there is reference to the idea that "one can always conjecture that some factor in the overlap of the backwards light cones has controlled the presumably random choices".

There are at least two problems with this. The first problem is the assumption that there need to be *random choices* for the experiment to show SI. Unless the superdeterministic candidate has a mechanism to discern whether the measurement choice is being varied or not, it need not be random or subject to free will at all - it could be static (and therefore not demonstrate any deviation from the quantum mechanical expectation value, which is what we needed to demonstrate as a viable possibility). Second, and most importantly, the entangled particles subject to the Bell test need not have ever existed in a common light cone. And in fact, they need not have ever co-existed to be entangled. Or come from a common source at all. I'll supply more references as needed.

https://arxiv.org/abs/1209.4191
Entanglement Between Photons that have Never Coexisted

Yes, there seems to be a violation of SI. But that must simply be a result of quantum contextuality itself. And such contextuality must have a nonlocal element. But it is a special kind of nonlocality, not the kind that appears in Bohmian Mechanics (for example) where there are instantaneous effects regardless of distance. The overall context is traced out by components that are individually limited to c.
 
DrChinese said:
Nice article. Obviously, one of the problems when discussing Superdeterminism in a serious manner is that there really aren't any true candidates to knock down by way of experiment. I appreciate that Hossenfelder is trying to present something that could be knocked down, but in all honesty that won't reduce the seemingly infinite supply of alternate candidates.
There is also an infinite supply of non-local candidate theories. In fact, there is an infinite supply of theories in most paradigms, so this isn't a proper argument against superdeterminism. And superdeterministic theories are actually much harder to construct than non-local ones, so the supply of superdeterministic theories is in some sense even smaller than the supply of non-local ones.
 
When I click on the link to "Mermin's Challenge", it says I'm not allowed to edit that item.
 
PeroK said:
When I click on the link to "Mermin's Challenge", it says I'm not allowed to edit that item.
Sorry, I accidentally inserted my link as an author to that Insight. Hopefully, it's fixed now :-)
 
Lord Jestocost said:
In Superdeterminism: A Guide for the Perplexed one reads:

"The two biggest problem with superdeterminism at the moment are (a) the lack of a generally applicable fundamental theory and (b) the lack of experiment."
In the New Scientist article, it did say Ghosh was planning an experiment proposed by Sabine.
 
DrChinese said:
In your example, the 32/23 combinations produce different outcome sets for certain hidden variable combinations. In an actual experiment with the 32 or 23 settings as static, you'd notice that immediately (since the results must still match prediction) - so there must be more to the SI violation so that this result does not occur. And just like that, the Superdeterminism candidate must morph yet again.
If the instruction sets are produced in concert with the settings as shown in the Table, the 23/32 setting would produce agreement in 1/4 of the trials, which matches QM.
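
For concreteness, here is a minimal sketch in Python (the weights below are a simple illustrative choice, not a transcription of the Insight's Table 1) of how setting-dependent instruction-set frequencies can give exactly 1/4 agreement for unequal settings, whereas any setting-independent distribution of instruction sets gives at least 1/3:

Python:
# Mermin-device toy model: each trial, both particles carry the same
# "instruction set" assigning a color (R or G) to each of the 3 settings.
import itertools
import random

SETTINGS = (1, 2, 3)
INSTRUCTION_SETS = list(itertools.product("RG", repeat=3))  # all 8 possibilities

def agree(instr, i, j):
    """True if the instruction set gives the same color for settings i and j."""
    return instr[i - 1] == instr[j - 1]

def si_model(i, j):
    # Statistical Independence holds: the instruction set is drawn uniformly,
    # with no dependence on the settings i, j.
    return random.choice(INSTRUCTION_SETS)

def sd_model(i, j):
    # Statistical Independence violated: the instruction set depends on the
    # (future) settings. For unequal settings, sets that agree on (i, j) are
    # drawn with probability 1/4, so P(agree | i != j) = 1/4, the QM value.
    if i == j:
        return random.choice(INSTRUCTION_SETS)
    agreeing = [s for s in INSTRUCTION_SETS if agree(s, i, j)]
    disagreeing = [s for s in INSTRUCTION_SETS if not agree(s, i, j)]
    return random.choice(agreeing if random.random() < 0.25 else disagreeing)

def unequal_setting_agreement(model, trials=100_000):
    hits = total = 0
    for _ in range(trials):
        i, j = random.choice(SETTINGS), random.choice(SETTINGS)  # the "choices"
        if i == j:
            continue  # same settings always agree; look at unequal settings only
        hits += agree(model(i, j), i, j)
        total += 1
    return hits / total

print("SI holds   :", round(unequal_setting_agreement(si_model), 3))  # ~0.5 here; >= 1/3 for any setting-independent distribution
print("SI violated:", round(unequal_setting_agreement(sd_model), 3))  # ~0.25, matching QM

Whether such setting-dependent weightings could arise from any plausible physical mechanism is, of course, the "conspiracy" worry discussed later in the thread.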
 
DrChinese said:
Second, and most importantly, the entangled particles subject to the Bell test need not have ever existed in a common light cone. And in fact, they need not have ever co-existed to be entangled. Or come from a common source at all. I'll supply more references as needed.

https://arxiv.org/abs/1209.4191
Entanglement Between Photons that have Never Coexisted
I don't see the relevance of this article. If you can locally explain the correlations of an ordinary Bell pair, you can also locally explain the results of this paper. This is just entanglement swapping, which can be explained locally (https://arxiv.org/abs/quant-ph/9906007).
 
  • #10
RUTA said:
If the instruction sets are produced in concert with the settings as shown in the Table, the 23/32 setting would produce agreement in 1/4 of the trials, which matches QM.
[And of course I know you are not arguing in favor of superdeterminism in your Insight.]

My mistake, yes, I see now that is a feature of the table. So basically, that would be equivalent to saying that for the 23 case (to be specific), as marked in the table, rows 2/3/4 (the 23 cases) add to 100% as follows:

-Row 2 occurs 50% of the time when 23 measurement occurs. Because this is doubled by design/assumption.
-Row 3 occurs 25% of the time when 23 measurement occurs.
-Row 4 occurs 25% of the time when 23 measurement occurs.
Total=100%

Could just as rationally be:

-Row 2 occurs 75% of the time when 23 measurement occurs.
-Row 3 occurs 0% of the time when 23 measurement occurs. This is so rows 2 and 3 add to 75%.
-Row 4 occurs 25% of the time when 23 measurement occurs.
Total=100%, since the stats add up the same. After all, that would cause the RRG & GGR cases to be wildly overrepresented, but since we are only observing the 23 measurements, we don't see that the occurrence rate of the other (unmeasured) hidden variable (setting 1) depends on the choice of the 23 basis. We are supposed to believe that it exists (that is the purpose of a hidden variable model), yet it has a value that makes it inconsistent with the 23 pairing (of course this inconsistency is hidden too).

Obviously, in this table, we have the violation of statistical independence as essentially being the same thing as contextuality - which is more or less orthodox QM anyway. Further, it has a quantum nonlocal element, as the Bell tests with entanglement swapping indicate*. That is because the "23" choice was signaled to the entangled source so that the Row 2 case can be properly overrepresented. But in an entanglement swapping setup, there are 2 sources! So now we need to modify our superdeterministic candidate to account for a mechanism whereby the sources know to synchronize in such a way as to yield the expected results - which is more or less orthodox QM anyway.

My point is that the explicit purpose of a Superdeterministic candidate is to counter the usual Bell conclusion, namely that a local realistic theory (read: noncontextual causal theory) is not viable. Yet we are now stuck with a superdeterministic theory which features local hidden variables, except they are also contextual and quantum nonlocal. I just don't see the attraction.

*Note that RBW does not have any issue with this type of quantum nonlocal element. The RBW diagrams can be modified to account for entanglement swapping, as I see it (at least I think so). RBW being explicitly contextual, as I understand it: "...spatiotemporal relations provide the ontological basis for our geometric interpretation of quantum theory..." And on locality: "While non-separable, RBW upholds locality in the sense that there is no action at a distance, no instantaneous dynamical or causal connection between space-like separated events [and there are no space-like worldlines.]" [From one of your papers (2008).]
 
  • #11
Nullstein said:
I don't see the relevance of this article. If you can locally explain the correlations of an ordinary Bell pair, you can also locally explain the results of this paper. This is just entanglement swapping, which can be explained locally (https://arxiv.org/abs/quant-ph/9906007).
In your reference (1999), its Figure 3 is not a fair representation of the experiment I presented (2012). Deutsch is not what I would call a universally accepted writer on the subject, and "is a legendary advocate for the MWI". "Quantum nonlocality" is generally accepted in current science. And we've had this discussion here many a time. The nature of that quantum nonlocality has no generally accepted mechanism, however. That varies by interpretation.

The modern entanglement swapping examples have entangled particles which never exist in a common backward light cone - so they cannot have had an opportunity for a local interaction or synchronization. The A and B particles are fully correlated like any EPR pair, even though far apart and having never interacted*. That is quantum nonlocality, plain and simple. There are no "local" explanations for entanglement swapping of this type (that I have seen), of course excepting interpretations that claim to be fully local and forward in time causal (which I would probably dispute). Some MWI proponents make this claim, although not all.

*Being from independent photon sources, and suitably distant from each other.
 
  • #12
Interesting to read that there are some experiments planned to give evidence for (or against) superdeterminism.
 
  • #13
Super-determinism is super-nonlocal

In her "Guide for the Perplexed", Sabine Hossenfelder writes:
"What does it mean to violate Statistical Independence? It means that fundamentally everything in the universe is connected with everything else, if subtly so. You may be tempted to ask where these connections come from, but the whole point of superdeterminism is that this is just how nature is. It’s one of the fundamental assumptions of the theory, or rather, you could say one drops the usual assumption that such connections are absent. The question for scientists to address is not why nature might choose to violate Statistical Independence, but merely whether the hypothesis that it is violated helps us to better describe observations."

I think this quote perfectly explains how super-determinism avoids quantum nonlocality. Not by replacing it with locality, but by replacing it with super-nonlocality!

In ordinary nonlocality, things get correlated over distance due to nonlocal forces. In super-nonlocality one does not need forces for that. They are correlated just because the fundamental laws of Nature (or God if you will) say so. You don't need to fine tune the initial conditions to achieve such correlations, it's a general law that does not depend on initial conditions. You don't need any complicated conspiracy for that, in principle it can be a very simple general law.
 
  • #14
Demystifier said:
Super-determinism is super-nonlocal

In her "Guide for the Perplexed", Sabine Hossenfelder writes:
"What does it mean to violate Statistical Independence? It means that fundamentally everything in the universe is connected with everything else, if subtly so. You may be tempted to ask where these connections come from, but the whole point of superdeterminism is that this is just how nature is. It’s one of the fundamental assumptions of the theory, or rather, you could say one drops the usual assumption that such connections are absent. The question for scientists to address is not why nature might choose to violate Statistical Independence, but merely whether the hypothesis that it is violated helps us to better describe observations."

I think this quote perfectly explains how super-determinism avoids quantum nonlocality. Not by replacing it with locality, but by replacing it with super-nonlocality!

In ordinary nonlocality, things get correlated over distance due to nonlocal forces. In super-nonlocality one does not need forces for that. They are correlated just because the fundamental laws of Nature (or God if you will) say so. You don't need to fine tune the initial conditions to achieve such correlations, it's a general law that does not depend on initial conditions. You don't need any complicated conspiracy for that, in principle it can be a very simple general law.
I thought that the motivation for superdeterminism wasn't to avoid nonlocality but nonclassicality. The claim being that the world is really classical, but somehow (through the special initial conditions and evolution laws) it only seems like quantum theory is a correct description.
 
  • #15
martinbn said:
I thought that the motivation for superdeterminism wasn't to avoid nonlocality but nonclassicality. The claim being that the world is really classical, but somehow (through the special initial conditions and evolution laws) it only seems like quantum theory is a correct description.
What exactly do you mean by non-classicality?
 
  • #16
DrChinese said:
In your reference (1999), its Figure 3 is not a fair representation of the experiment I presented (2012).
Why do you think so?
DrChinese said:
Deutsch is not what I would call a universally accepted writer on the subject, and "is a legendary advocate for the MWI".
That's an ad hominem argument and also a personal opinion. Deutsch is a well-respected physicist working in quantum foundations and quantum computing at Oxford University who has received plenty of awards.
DrChinese said:
"Quantum nonlocality" is generally accepted in current science. And we've had this discussion here many a time. The nature of that quantum nonlocality has no generally accepted mechanism, however. That varies by interpretation.
"Quantum nonlocality" is just another word for Bell-violating, which is of course accepted, because it has been demonstrated experimentally and nobody has denied that. The open question is the mechanism and superdeterminism is one way to address it.
DrChinese said:
The modern entanglement swapping examples have entangled particles which never exist in a common backward light cone - so they cannot have had an opportunity for a local interaction or synchronization.
That is not necessary, because entanglement swapping isn't really a physical process. It's essentially just an agreement between two distant parties about which photons to use for subsequent experiments. These two parties need information from a third party who has access to particles from both Bell pairs and who communicates with them over a classical channel at sub light speed.
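
For concreteness, here is a minimal sketch (just the standard textbook calculation, not tied to any particular experimental realization) of what the swapping step does in the formalism: a Bell-state measurement on the two "middle" photons leaves the two "outer" photons in a Bell state, conditional on which outcome occurred.

Python:
# Entanglement swapping: two independent Bell pairs (qubits 1,2) and (qubits 3,4).
# Project qubits 2 and 3 onto each Bell state and inspect what remains for 1 and 4.
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

def phi_plus():
    return (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)

# Initial state |Phi+>_{12} (x) |Phi+>_{34}, reshaped with indices (q1, q2, q3, q4).
psi = np.kron(phi_plus(), phi_plus()).reshape(2, 2, 2, 2)

bell_basis = {
    "Phi+": (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2),
    "Phi-": (np.kron(ket0, ket0) - np.kron(ket1, ket1)) / np.sqrt(2),
    "Psi+": (np.kron(ket0, ket1) + np.kron(ket1, ket0)) / np.sqrt(2),
    "Psi-": (np.kron(ket0, ket1) - np.kron(ket1, ket0)) / np.sqrt(2),
}

for name, bell in bell_basis.items():
    # <bell|_{23} |psi>_{1234}: unnormalized post-measurement state of qubits 1 and 4.
    post = np.einsum("bc,abcd->ad", bell.conj().reshape(2, 2), psi)
    prob = np.sum(np.abs(post) ** 2)   # probability of this outcome (1/4 each)
    post /= np.sqrt(prob)              # normalized state of qubits 1 and 4
    print(f"BSM outcome {name}: p = {prob:.2f}, state of qubits 1 & 4:\n{np.round(post, 3)}")

Each of the four outcomes leaves qubits 1 and 4 in a (maximally entangled) Bell state, but which Bell state depends on the outcome - so the entanglement is only usable once that outcome has been communicated over the classical, sub-light-speed channel.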
DrChinese said:
The A and B particles are fully correlated like any EPR pair, even though far apart and having never interacted*. That is quantum nonlocality, plain and simple.
Again, nobody denied this. The statement is that entanglement produced by entanglement swapping can be explained if the correlations in the original Bell pairs can be explained. Entanglement swapping doesn't add to the mystery.
 
  • #17
DrChinese said:
[And of course I know you are not arguing in favor of superdeterminism in your Insight.]
Correct, I'm definitely not arguing for SD. If Statistical Independence is violated and QM needs to be underwritten as Sabine suggests, physics is not going to be fun for me anymore :cry:

DrChinese said:
Could just as rationally be:

-Row 2 occurs 75% of the time when 23 measurement occurs.
-Row 3 occurs 0% of the time when 23 measurement occurs. This is so rows 2 and 3 add to 75%.
-Row 4 occurs 25% of the time when 23 measurement occurs.
Total=100%, since the stats add up the same. After all, that would cause the RRG & GGR cases to be wildly overrepresented, but since we are only observing the 23 measurements, we don't see that the occurrence rate of the other (unmeasured) hidden variable (setting 1) depends on the choice of the 23 basis. We are supposed to believe that it exists (that is the purpose of a hidden variable model), yet it has a value that makes it inconsistent with the 23 pairing (of course this inconsistency is hidden too).
Right, we have no idea exactly how Statistical Independence is violated because it's all "hidden." My Table 1 is just an example to show what people refer to as the "conspiratorial nature" of SD. Sabine addresses that charge in her papers, too.

DrChinese said:
Obviously, in this table, we have the violation of statistical independence as essentially being the same thing as contextuality - which is more or less orthodox QM anyway. Further, it has a quantum nonlocal element, as the Bell tests with entanglement swapping indicate*. That is because the "23" choice was signaled to the entangled source so that the Row 2 case can be properly overrepresented. But in an entanglement swapping setup, there are 2 sources! So now we need to modify our superdeterministic candidate to account for a mechanism whereby the sources know to synchronize in such a way as to yield the expected results - which is more or less orthodox QM anyway.

My point is that the explicit purpose of a Superdeterministic candidate is to counter the usual Bell conclusion, namely that a local realistic theory (read: noncontextual causal theory) is not viable. Yet we are now stuck with a superdeterministic theory which features local hidden variables, except they are also contextual and quantum nonlocal. I just don't see the attraction.
Sabine points out that to evade the conclusion of Bell, you need to violate Statistical Independence and/or locality. She also points out that SD could violate both, giving you the worst of both worlds. I'll let the experimental evidence decide, but I'm definitely hoping for no SD 🙂

DrChinese said:
*Note that RBW does not have any issue with this type of quantum nonlocal element. The RBW diagrams can be modified to account for entanglement swapping, as I see it (at least I think so). RBW being explicitly contextual, as I understand it: "...spatiotemporal relations provide the ontological basis for our geometric interpretation of quantum theory..." And on locality: "While non-separable, RBW upholds locality in the sense that there is no action at a distance, no instantaneous dynamical or causal connection between space-like separated events [and there are no space-like worldlines.]" [From one of your papers (2008).]
We uphold locality and Statistical Independence in our principle account of QM as follows. Locality is not violated because there is no superluminal signaling. Statistical Independence is not violated because Alice and Bob faithfully reproduce the most accurate possible QM state and make their measurements in accord with fair sampling (independently and randomly). In this principle account of QM (ultimately based on quantum information theorists' principle of Information Invariance & Continuity, see the linked paper immediately preceding), the state being created in every trial of the experiment is the best one can do, given that everyone must measure the same value for Planck's constant regardless of the orientation of their measurement device. RBW is nonetheless contextual in the 4D sense, see here and here for our most recent explanations of that.
 
  • #18
RUTA said:
1. Locality is not violated because there is no superluminal signaling.

RUTA said:
2. RBW is nonetheless contextual in the 4D sense, see here and here for our most recent explanations of that.

1. Certainly, RBW is local in the sense that Bohmian Mechanics is not. Bohmian Mechanics does not feature superluminal signaling either, and we know we don't want to label BM as "local".

So I personally wouldn't label RBW as you do ("local"); I label it "quantum nonlocal" when a quantum system (your block) has spatiotemporal extent (your 4D contextuality) - as in the Bell scenario. But I see why you describe it as you do: it's because c is respected when you draw context vertices on a block (not sure that is the correct language to describe that).

2. Thanks for the references.
 
  • #19
Nullstein said:
That's an ad hominem argument
So is your response (although since your statements about Deutsch are positive, it might be better termed a "pro homine" argument):

Nullstein said:
Deutsch is a well-respected physicist working in quantum foundations and quantum computing at Oxford University who has received plenty of awards.
None of these are any more relevant to whether Deutsch's arguments are valid than the things you quoted from @DrChinese. What is relevant is what Deutsch actually says on the topic and whether it is logically coherent and consistent with what we know from experiments.
 
  • #20
PeterDonis said:
So is your response (although since your statements about Deutsch are positive, it might be better termed a "pro homine" argument): None of these are any more relevant to whether Deutsch's arguments are valid than the things you quoted from @DrChinese. What is relevant is what Deutsch actually says on the topic and whether it is logically coherent and consistent with what we know from experiments.
I agree, I just wanted to defend Deutsch's reputation and point out that DrChinese did not address the argument in the article. I considered his response quite rude, given that I just argued based on a legit article by a legit researcher. The article contains detailed calculations, so one could just point to a mistake if there was one.
 
  • #21
DrChinese said:
1. Certainly, RBW is local in the sense that Bohmian Mechanics is not. Bohmian Mechanics does not feature superluminal signaling either, and we know we don't want to label BM as "local".

So I personally wouldn't label RBW as you do ("local"); I label it "quantum nonlocal" when a quantum system (your block) has spatiotemporal extent (your 4D contextuality) - as in the Bell scenario. But I see why you describe it as you do: it's because c is respected when you draw context vertices on a block (not sure that is the correct language to describe that).

2. Thanks for the references.
I should be more precise, sorry. In the quantum-classical contextuality of RBW, "local" means "nothing moving superluminally" and that includes "no superluminal information exchange." That's because in our ontology, quanta of momentum exchanged between classical objects don't have worldlines.

Notice also that when I said RBW does not entail violations of Statistical Independence because the same state is always faithfully produced and measured using fair sampling, I could have said that about QM in general. Again, what I said follows from the fact that there are no "quantum entities" moving through space, there is just the spatiotemporal (4D) distribution of discrete momentum exchanges between classical objects. And that quantum-classical contextuality conforms to the boundary of a boundary principle and the relativity principle.

So, how do we explain violations of Bell's inequality without nonlocal interactions, violations of Statistical Independence, or "shut up and calculate" (meaning "the formalism of QM works, so it's already 'explained'")? Our principle explanation of Bell state entanglement doesn't entail an ontology at all, so there is no basis for nonlocality or violations of Statistical Independence. And, it is the same (relativity) principle that explains time dilation and length contraction without an ontological counterpart and without saying, "the Lorentz transformations work, so it's already 'explained'". So, we do have an explanation of Bell state entanglement (and therefore of the Tsirelson bound).
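
For reference, the bound mentioned above is stated in terms of the standard CHSH quantity
$$S = E(a,b) - E(a,b') + E(a',b) + E(a',b'),$$
which satisfies ##|S| \le 2## for any local hidden-variable model obeying Statistical Independence, while QM allows values up to the Tsirelson bound ##|S| = 2\sqrt{2} \approx 2.83##.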
 
  • #22
RUTA said:
From the experiment that they are trying to perform, the hope seems to be that if the detectors are prepared similarly enough then statistical independence will be violated. In other words, that what we've seen as the statistical independence of measurements is an artifact of detectors in different states. Is that a correct reading here?
 
  • #23
jbergman said:
From the experiment that they are trying to perform, the hope seems to be that if the detectors are prepared similarly enough then statistical independence will be violated. In other words, that what we've seen as the statistical independence of measurements is an artifact of detectors in different states. Is that a correct reading here?
Well, my understanding is that the experiments will check for violations of randomness where QM predicts randomness. That means we should look for a theory X underwriting QM so that QM is just a statistical approximation to theory X. Then you can infer the existence of hidden variables (hidden in QM, but needed in theory X), whence nonlocality and/or the violation of Statistical Independence. Just my guess, I don't study SD.
 
  • #24
Nullstein said:
1. That's an ad hominem argument and also a personal opinion. Deutsch is a well-respected physicist working in quantum foundations and quantum computing at Oxford University who has received plenty of awards.

2. That is not necessary, because entanglement swapping isn't really a physical process. It's essentially just an agreement between two distant parties about which photons to use for subsequent experiments. These two parties need information from a third party who has access to particles from both Bell pairs and who communicates with them over a classical channel at sub light speed.

3. Again, nobody denied this. The statement is that entanglement produced by entanglement swapping can be explained if the correlations in the original Bell pairs can be explained. Entanglement swapping doesn't add to the mystery.

1. It is relevant here if he is stating something that is not generally accepted (and the part in quotes was actually from another physicist). Local explanations of swapping are not generally accepted, and clearly that is the tenor of the paper. Deutsch is of course a respected physicist (and respected by me); that's not the issue.

2. I specifically indicated how your citation differed from swapping using independent sources outside of a common light cone (which does not occur in Deutsch's earlier paper). There are no "local" explanations for how 2 photons that have never existed within a common light cone manage to be perfectly correlated (entangled) a la Bell. That you might need information from a classical channel to postselect pairs is irrelevant. No one is saying there is FTL signaling or the like.

3. Of course swapping adds to the mystery. And it certainly rules out some types of candidate theories. How can there be a common local cause when there is no common light cone overlap?
 
  • #25
DrChinese said:
3. Of course swapping adds to the mystery. And it certainly rules out some types of candidate theories. How can there be a common local cause when there is no common light cone overlap?
Can you clarify? How can there be no common overlap? Any two past lightcones overlap.
 
  • #26
Nullstein said:
...I just wanted to defend Deutsch's reputation and point out that DrChinese did not address the argument in the article. I considered his response quite rude, given that I just argued based on a legit article by a legit researcher. The article contains detailed calculations, so one could just point to a mistake if there was one.

I read the 1999 Deutsch article, and unfortunately it did not map to my 2012 reference. There is actually no reason it should (or would), as the experiment had not been conceived at that time.

As I mentioned in another post, Deutsch's reputation does not need defending. However, I would not agree that his well-known positions on MWI should be considered as representing generally accepted science. Some are, and some are not, and I certainly think it is fair to identify positions as "personal" vs "generally accepted" when it is relevant. There is nothing wrong with having personal opinions that are contrary to orthodox science, but I believe those should be identified.

Show me where Zeilinger (or any of the authors of similar papers) says there are local explanations for the kind of swapping I describe, and we'll be on to something. From the paper: "The observed quantum correlations manifest the non-locality of quantum mechanics in spacetime." I just don't see how SD can get around this: it is supposed to restore realism and locality to QM.
 
  • #27
DrChinese said:
1. Relevant here if he is stating something that is not generally accepted (and the part in quotes was actually from another physicist). Local explanations of swapping are not, and clearly that is the tenor of the paper. Deutsch is of course a respected physicist (and respected by me), that's not the issue.
Have you conducted a representative survey, or how else do you know what's generally accepted? I'm not aware of a rebuttal to Deutsch's paper, so his argument must be taken to be the current state of our knowledge. It's well written and explicit and doesn't rely on vague formulations, so it should be quite easy to point at a mistake if there was one.
DrChinese said:
2. I specifically indicated how your citation differed from swapping using independent sources outside of a common light cone (which does not occur in Deutsch's earlier paper).
Deutsch's argument is completely independent of the spatial and temporal order. He just tracks down the flow of information in a general entanglement swapping setting and shows that everything is perfectly local.
DrChinese said:
There are no "local" explanations for how 2 photons that have never existed within a common light cone manage to be perfectly correlated (entangled) a la Bell. That you might need information from a classical channel to postselect pairs is irrelevant. No one is saying there is FTL signaling or the like.

3. Of course swapping adds to the mystery. And it certainly rules out some types of candidate theories. How can there be a common local cause when there is no common light cone overlap?
The particles are only entangled after Alice and Bob have received Carol's information through the classical, sub light speed channel and adjust their states locally based on that information, so there is a common event in the intersection of the backwards light cones. There is just no common cause for the correlations of the original Bell pairs. Entanglement swapping just propagates this unexplained quantum correlation through this classical channel (at ##v<c##). As Deutsch puts it: "The ability of quantum information to flow through a classical channel in this way, surviving decoherence, is also the basis of quantum teleportation, a remarkable phenomenon to which we now turn."
DrChinese said:
I read the 1999 Deutsch article, and unfortunately it did not map to my 2012 reference. There is actually no reason it should (or would), as the experiment had not been conceived at that time.
Entanglement swapping was known long before 2012 and Deutsch discusses it in full generality in his paper. Your paper is not about a new phenomenon, but just an experimental verification of the phenomenon of entanglement swapping, which was known long before. It contains no new theory.
DrChinese said:
As I mentioned in another post, Deutsch's reputation does not need defending. However, I would not agree that his well-known positions on MWI should be considered as representing generally accepted science. Some are, and some are not, and I certainly think it is fair to identify positions as "personal" vs "generally accepted" when it is relevant. There is nothing wrong with having personal opinions that are contrary to orthodox science, but I believe those should be identified.

Show me where Zeilinger (or any of the authors of similar papers) says there are local explanations for the kind of swapping I describe, and we'll be on to something.
This is again an argument based on reputation and not on math. Zeilinger doesn't decide what's wrong or right. If there was a mistake in Deutsch's paper, one should be able to point it out. It's written in a very clear language and exposes all the calculations.
DrChinese said:
From the paper: "The observed quantum correlations manifest the non-locality of quantum mechanics in spacetime." I just don't see how SD can get around this: it is supposed to restore realism and locality to QM.
In the literature, "quantum non-locality" just means violating Bell's inequality. Nobody denies that Bell's inequality is violated by this experiment. The mechanism is what's puzzeling.

(By the way, I don't even believe in superdeterminism. I just think the common criticism is unfair and doesn't hold water.)
 
  • #28
Nullstein said:
I'm not aware of a rebuttal to Deutsch's paper, so his argument must be taken to be the current state of our knowledge.
This is not a valid argument. There are many papers that never get rebuttals but are still incorrect.

Nullstein said:
If there was a mistake in Deutsch's paper, one should be able to point it out.
This is not a valid argument either; many papers have had mistakes that are not easily pointed out. Nor is it relevant to what @DrChinese is saying. @DrChinese is not claiming there is a mistake in Deutsch's paper. He is saying he doesn't see how Deutsch's paper addresses the kind of experiment he is talking about (entanglement swapping) at all. If you think Deutsch's paper does address that, the burden is on you to explain how.
 
  • #29
PeterDonis said:
This is not a valid argument. There are many papers that never get rebuttals but are still incorrect.
Maybe, although a paper by a famous physicist like Deutsch is much more likely to get a rebuttal if it is false, than a paper by a random PhD candidate. But as you say, the objective should be to point at a mistake in the paper and not to fight about missing rebuttals. If DrChinese claimed that it was wrong, he should be able to point at a mistake. Since I quoted that paper, the actual argument in the paper has not been addressed. We have only been arguing about reputation and rebuttals so far.
PeterDonis said:
@DrChinese is not claiming there is a mistake in Deutsch's paper. He is saying he doesn't see how Deutsch's paper addresses the kind of experiment he is talking about (entanglement swapping) at all. If you think Deutsch's paper does address that, the burden is on you to explain how.
Entanglement swapping is a special case of quantum teleportation. Deutsch discusses the general quantum teleportation framework in his paper (section 5, "Information flow in quantum teleportation"), so his paper also applies to the special case of entanglement swapping.
 
  • #30
Nullstein said:
"Quantum nonlocality" is just another word for Bell-violating, which is of course accepted, because it has been demonstrated experimentally and nobody has denied that. The open question is the mechanism and superdeterminism is one way to address it.
It's perhaps not a fair question but what odds would you give on SD being that mechanism? Would you bet even money on this experiment proving QM wrong? Or, 10-1 against or 100-1 against?

I wonder what Hossenfelder would be prepared to bet on it?
 
  • #31
PeroK said:
It's perhaps not a fair question but what odds would you give on SD being that mechanism? Would you bet even money on this experiment proving QM wrong? Or, 10-1 against or 100-1 against?

I wonder what Hossenfelder would be prepared to bet on it?
Personally I wouldn't bet on any classical deterministic mechanism in the first place. I find both non-locality and superdeterminism equally unappealing. Both mechanisms require fine-tuning. However, if I had to choose, I would probably pick superdeterminism, just because it is less anthropocentric. There are non-local cause and effect relations everywhere but nature conspires in such a way to prohibit communication? I can't make myself believe that, but that's just my personal opinion.
 
  • #32
Nullstein said:
If DrChinese claimed that it was wrong
As I have already said, he didn't. He never said anything in the Deutsch paper you referenced was wrong. He just said it wasn't relevant to the entanglement swapping experiments he has referenced. He also gave a specific argument for why (in post #11, repeated in post #24).

Your only response to that, so far, has been to assert that the Deutsch paper is relevant to the entanglement swapping experiments @DrChinese referenced, without any supporting argument and without addressing the specific reason @DrChinese gave for saying it was not.

Nullstein said:
Since I quoted that paper
I don't see any quotes from that paper in any of your posts. You referenced it, but as far as I can tell you gave no specifics about how you think that paper addresses the issues @DrChinese is raising.
 
  • #33
Nullstein said:
Personally I wouldn't bet on any classical deterministic mechanism in the first place. I find both non-locality and superdeterminism equally unappealing.
As Sabine points out, if you want a constructive explanation of Bell inequality violation, it's nonlocality and/or SD. That's why we pivoted to a principle account (from information-theoretic reconstructions of QM). We cheated a la Einstein for time dilation and length contraction.
 
  • #34
PeterDonis said:
As I have already said, he didn't. He never said anything in the Deutsch paper you referenced was wrong. He just said it wasn't relevant to the entanglement swapping experiments he has referenced. He also gave a specific argument for why (in post #11, repeated in post #24).
He just claimed it didn't apply without pointing out which assumption in the paper would be violated. The paper discussed the most general formulation of quantum teleportation without any restrictions about spatial or temporal order (or can you point to any such restriction in the paper?), so it applies to the special case of entanglement swapping.
PeterDonis said:
Your only response to that, so far, has been to assert that the Deutsch paper is relevant to the entanglement swapping experiments @DrChinese referenced, without any supporting argument and without addressing the specific reason @DrChinese gave for saying it was not.
That's not true. I addressed his argument multiple times (post #27 and #29). Entanglement swapping is a special case of quantum teleportation, which is discussed in the paper in full generality. I even said that in the very post you currently quoted, but you just cut it away.
PeterDonis said:
I don't see any quotes from that paper in any of your posts. You referenced it, but as far as I can tell you gave no specifics about how you think that paper addresses the issues @DrChinese is raising.
See post #29, which you just quoted. I gave you exactly the section which contains the argument and explained why it applies (again: entanglement swapping is a special case of what is discussed in the paper).
 
  • #35
Nullstein said:
The open question is the mechanism and superdeterminism is one way to address it.
I agree with this, but my issue with superdeterminism is that from my perspective it is a kind of "mechanism" to justify a particular kind of philosophical perspective, yet empty of explanatory value, and in particular lacking algorithmic insights for an "agent"?

/Fredrik
 
  • #36
martinbn said:
Can you clarify? How can there be no common overlap? Any two past lightcones overlap.
You'd think so! But these photons never existed in a common light cone because their lifespan is short. In fact, in this particular experiment, one photon was detected (and ceased to exist) BEFORE its entangled partner was created.

In this next experiment, the entangled photon pairs are spatially separated (and did coexist for a period of time). However, they were created sufficiently far apart that they never occupied a common light cone.

High-fidelity entanglement swapping with fully independent sources (2009)
https://arxiv.org/abs/0809.3991
"Entanglement swapping allows to establish entanglement between independent particles that never interacted nor share any common past. This feature makes it an integral constituent of quantum repeaters. Here, we demonstrate entanglement swapping with time-synchronized independent sources with a fidelity high enough to violate a Clauser-Horne-Shimony-Holt inequality by more than four standard deviations. The fact that both entangled pairs are created by fully independent, only electronically connected sources ensures that this technique is suitable for future long-distance quantum communication experiments as well as for novel tests on the foundations of quantum physics."

And from a 2009 paper that addresses the theoretical nature of entanglement swapping with particles with no common past, here is a quote that indicates that in fact this entanglement IS problematic for any theory claiming the usual locality (local causality):

"It is natural to expect that correlations between distant particles are the result of causal influences originating in their common past — this is the idea behind Bell’s concept of local causality [1]. Yet, quantum theory predicts that measurements on entangled particles will produce outcome correlations that cannot be reproduced by any theory where each separate outcome is locally determined by variables correlated at the source. This nonlocal nature of entangled states can be revealed by the violation of Bell inequalities.

"However remarkable it is that quantum interactions can establish such nonlocal correlations, it is even more remarkable that particles that never directly interacted can also become nonlocally correlated. This is possible through a process called entanglement swapping [2]. Starting from two independent pairs of entangled particles, one can measure jointly one particle from each pair, so that the two other particles become entangled, even though they have no common past history. The resulting pair is a genuine entangled pair in every aspect, and can in particular violate Bell inequalities.

"Intuitively, it seems that such entanglement swapping experiments exhibit nonlocal effects even stronger than those of usual Bell tests. To make this intuition concrete and to fully grasp the extent of nonlocality in entanglement swapping experiments, it seems appropriate to contrast them with the predictions of local models where systems that are initially uncorrelated are described by uncorrelated local variables. This is the idea that we pursue here."


Despite the comments from Nullstein to the contrary, such swapped pairs are entangled without any qualification - as indicated in the quote above.
 
  • #37
DrChinese said:
You'd think so! But these photons never existed in a common light cone because their lifespan is short. In fact, in this particular experiment, one photon was detected (and ceased to exist) BEFORE its entangled partner was created.
It is probably the phrasing that you use that confuses me, but I still don't understand. For example, the past light cone of the event "production of the second pair of photons" contains the whole life of the first photon, which no longer exists. All the possible light cones that you can pick intersect.
 
  • #38
Nullstein said:
I find both non-locality and superdeterminism equally unappealing. Both mechanisms require fine-tuning.
How does non-locality require fine tuning?

Nullstein said:
However, if I had to choose, I would probably pick superdeterminism, just because it is less anthropocentric.
How is non-locality anthropocentric?

Nullstein said:
There are non-local cause and effect relations everywhere but nature conspires in such a way to prohibit communication? I can't make myself believe that, but that's just my personal opinion.
There is no conspiracy. See https://www.physicsforums.com/threa...unterfactual-definiteness.847628/post-5319182
 
  • #39
  • #40
RUTA said:
Well, my understanding is that the experiments will check for violations of randomness where QM predicts randomness.
Then the proponents of “superdeterminism” should tackle radioactive decay.
 
  • #41
martinbn said:
If I understand you correctly you are saying that it is (or might be) possible but it has not been found yet.
In principle, yes.

martinbn said:
Doesn't this contradict the no communication theorem?
No, because in theories such as Bohmian mechanics, the no communication theorem is a FAPP (for all practical purposes) theorem.

Roughly, this is like the law of large numbers. Is it valid for ##N=1000##? In principle, no. In practice, yes.
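
For concreteness: with ##N = 1000## independent fair coin flips, the typical fluctuation of the observed frequency about 1/2 is of order ##1/\sqrt{N} \approx 3\%## - small, but not strictly zero, which is what "FAPP" means here.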
 
  • #42
"Superdeterminism as a way to resolve the mystery of quantum entanglement is generally not taken seriously in the foundations community, as explained in this video by Sabine Hossenfelder (posted in Dec 2021). In her video, she argues that superdeterminism should be taken seriously, indeed it is what quantum mechanics (QM) is screaming for us to understand about Nature. According to her video per the twin-slit experiment, superdeterminism simply means the particles must have known at the outset of their trip whether to go through the right slit, the left slit, or both slits, based on what measurement was going to be done on them."

Why is "superdeterminism" not taken seriously. As Shimony, Clauser and Horne put it:

"In any scientific experiment in which two or more variables are supposed to be randomly selected, one can always conjecture that some factor in the overlap of the backwards light cones has controlled the presumably random choices. But, we maintain, skepticism of this sort will essentially dismiss all results of scientific experimentation. Unless we proceed under the assumption that hidden conspiracies of this sort do not occur, we have abandoned in advance the whole enterprise of discovering the laws of nature by experimentation."
 
  • #43
DrChinese said:
You'd think so! But these photons never existed in a common light cone because their lifespan is short. In fact, in this particular experiment, one photon was detected (and ceased to exist) BEFORE its entangled partner was created.

In this next experiment, the entangled photon pairs are spatially separated (and did coexist for a period of time). However, they were created sufficiently far apart that they never occupied a common light cone.

High-fidelity entanglement swapping with fully independent sources (2009)
https://arxiv.org/abs/0809.3991
"Entanglement swapping allows to establish entanglement between independent particles that never interacted nor share any common past. This feature makes it an integral constituent of quantum repeaters. Here, we demonstrate entanglement swapping with time-synchronized independent sources with a fidelity high enough to violate a Clauser-Horne-Shimony-Holt inequality by more than four standard deviations. The fact that both entangled pairs are created by fully independent, only electronically connected sources ensures that this technique is suitable for future long-distance quantum communication experiments as well as for novel tests on the foundations of quantum physics."

And from a 2009 paper that addresses the theoretical nature of entanglement swapping with particles with no common past, here is a quote that indicates that in fact this entanglement IS problematic for any theory claiming the usual locality (local causality):

"It is natural to expect that correlations between distant particles are the result of causal influences originating in their common past — this is the idea behind Bell’s concept of local causality [1]. Yet, quantum theory predicts that measurements on entangled particles will produce outcome correlations that cannot be reproduced by any theory where each separate outcome is locally determined by variables correlated at the source. This nonlocal nature of entangled states can be revealed by the violation of Bell inequalities.

"However remarkable it is that quantum interactions can establish such nonlocal correlations, it is even more remarkable that particles that never directly interacted can also become nonlocally correlated. This is possible through a process called entanglement swapping [2]. Starting from two independent pairs of entangled particles, one can measure jointly one particle from each pair, so that the two other particles become entangled, even though they have no common past history. The resulting pair is a genuine entangled pair in every aspect, and can in particular violate Bell inequalities.
None of these articles are in contradiction to what I said. Again, nobody doubts that entanglement swapping produces Bell pairs that violate Bell's inequality. The question is: Does entanglement swapping add to the mystery? And the answer is: It does not. The article by Deutsch applies to all these experiments and therefore shows that nothing mysterious beyond the usual mystery is going on.
DrChinese said:
"Intuitively, it seems that such entanglement swapping experiments exhibit nonlocal effects even stronger than those of usual Bell tests.
You should rather have highlighted the word "intuitively," because one may come to this conclusion intuitively. But a more complete analysis just shows that nothing about entanglement swapping is any stronger or any more mysterious than ordinary Bell pairs.
DrChinese said:
Despite the comments from Nullstein to the contrary, such swapped pairs are entangled without any qualification - as indicated in the quote above.
You misrepresent what I wrote. Of course the entanglement swapped pairs are entangled, I said that multiple times. It's just that no additional mystery is added by the process of swapping. All the mystery is concentrated in the initial Bell pairs themselves. The swapping process has a local explanation.
 
  • #44
I have never understood why there is such a visceral oppositional response to superdeterminism while the Many Worlds Interpretation and Copenhagen Interpretation enjoy tons of support. In Many Worlds the answer is essentially: "Everything happens, thus every observation is explained, you just happen to be the observer observing this outcome". In Copenhagen it's equally lazy: "Nature is just random, there is no explanation". How are these any less anti-scientific than Superdeterminism?
 
  • #45
Quantumental said:
I have never understood why there is such a visceral oppositional response to superdeterminism while the Many Worlds Interpretation and Copenhagen Interpretation enjoy tons of support. In Many Worlds the answer is essentially: "Everything happens, thus every observation is explained, you just happen to be the observer observing this outcome". In Copenhagen it's equally lazy: "Nature is just random, there is no explanation". How are these any less anti-scientific than Superdeterminism?
I think that all three somewhat break with simple ideas about how science is done and what it tells us but to a different degree.

Copenhagen says that there are limits to what we can know because we need to split the world into the observer and the observed. The MWI does away with unique outcomes. Superdeterminism does away with the idea that we can limit the influence of external degrees of freedom on the outcome of our experiment. These might turn out to be different sides of the same coin but taken at face value, the third seems like the most radical departure from the scientific method to me.

There's also the point that unlike Copenhagen and the MWI, superdeterminism is not an interpretation of an existing theory. It is a property of a more fundamental theory which doesn't make any predictions to test it (NB: Sabine Hossenfelder disagrees) because it doesn't even exist yet.
 
  • #46
Lord Jestocost said:
Why is "superdeterminism" not taken seriously. As Shimony, Clauser and Horne put it:

"In any scientific experiment in which two or more variables are supposed to be randomly selected, one can always conjecture that some factor in the overlap of the backwards light cones has controlled the presumably random choices. But, we maintain, skepticism of this sort will essentially dismiss all results of scientific experimentation.
I don't want to defend superdeterminism, but I have trouble seeing the difference from placebo effects in controlled medical studies, where it is randomly determined which patient should receive which treatment. Of course, we don't tell the individual patients which treatment they got. But we would naively not expect that it has an influence whether the experimenter (i.e. the doctor) knows which patient received which treatment. But apparently it has some influence.

I have the impression that the main difference to the Bell test is that for placebo effects in medicine we can make experiments to check whether such an influence is present. But the skepticism itself that such an influence might be present in the first place doesn't seem to me very different. (And if the conclusion is that non-actionable skepticism will lead us nowhere, then I can accept this. But that is not how Shimony, Clauser and Horne have put it.)
 
  • #47
gentzen said:
But we would naively not expect that it has an influence whether the experimenter (i.e. the doctor) knows which patient received which treatment. But apparently it has some influence.
Whether this is true or not (that question really belongs in the medical forum for discussion, not here), this is not the placebo effect. The placebo effect is where patients who get the placebo (but don't know it--and in double-blind trials the doctors don't know either) still experience a therapeutic effect.

I don't see how this has an analogue in physics.
 
  • #48
"In any scientific experiment in which two or more variables are supposed to be randomly selected ..."
gentzen said:
but I have trouble seeing the difference from placebo effects in controlled medical studies
PeterDonis said:
I don't see how this has an analogue in physics.
I am not suggesting an analogue in physics; I only have trouble "seeing" where the scenario that occurs in controlled medical studies was excluded by what Shimony, Clauser and Horne wrote.

I also tried to guess what they actually meant, namely that if there is nothing you can do to check whether your skepticism was justified, then it just stifles scientific progress.
 
  • #49
gentzen said:
I only have trouble "seeing" where the scenario that occurs in controlled medical studies was excluded by what Shimony, Clauser and Horne wrote.
I don't see how the two have anything to do with each other.

gentzen said:
I also tried to guess what they actually meant
Consider the following experimental setup:

I have a source that produces two entangled photons at point A. The two photons go off in opposite directions to points B and C, where their polarizations are measured. Points B and C are each one light-minute away from point A.

At each polarization measurement, B and C, the angle of the polarization measurement is chosen 1 second before the photon arrives, based on random bits of information acquired from incoming light from a quasar roughly a billion light-years away that lies in the opposite direction from the photon source at A.

A rough diagram of the setup is below:

Quasar B -- (1B LY) -- B -- (1 LM) -- A -- (1 LM) -- C -- (1B LY) -- Quasar C

In this setup, any violation of statistical independence between the angles of the polarizations and the results of the individual measurements (not the correlations between the measurements, those will be as predicted by QM, but the statistics of each measurement taken separately) would have to be due to some kind of pre-existing correlation between the photon source at A and the distant quasars at both B and C. This is the sort of thing that superdeterminism has to claim must exist.
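
For reference, in the usual notation the Statistical Independence assumption is
$$\rho(\lambda \mid a, b) = \rho(\lambda),$$
where ##\lambda## stands for the hidden variables carried by the pair produced at A and ##a, b## are the settings chosen at B and C. In the setup above, violating it would require the ##\lambda## produced at A to be correlated with the quasar bits that end up determining ##a## and ##b##.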
 
  • #50
kith said:
I think that all three somewhat break with simple ideas about how science is done and what it tells us but to a different degree.

Copenhagen says that there are limits to what we can know because we need to split the world into the observer and the observed. The MWI does away with unique outcomes. Superdeterminism does away with the idea that we can limit the influence of external degrees of freedom on the outcome of our experiment. These might turn out to be different sides of the same coin but taken at face value, the third seems like the most radical departure from the scientific method to me.

There's also the point that unlike Copenhagen and the MWI, superdeterminism is not an interpretation of an existing theory. It is a property of a more fundamental theory which doesn't make any predictions to test it (NB: Sabine Hossenfelder disagrees) because it doesn't even exist yet.

I disagree. Copenhagen's "there are limits to what we can know about reality, quantum theory is the limit we can probe" is no different from "reality is made up of more than quantum theory", which SD implies. It's semantics. As for MWI, yes, but by doing away with unique outcomes (at least in the most popular readings of Everettian QM) you literally state: "The theory only makes sense if you happen to be part of the wavefunction where the theory makes sense, but you are also part of maverick branches where QM is invalidated, thus nothing can really be validated the way it could be pre-QM". I would argue that this stance is equally radical in its departure from what we consider the scientific method to be.
 