Local realism ruled out? (was: Photon entanglement and )

Summary
The discussion revolves around the validity of local realism in light of quantum mechanics and Bell's theorem. Participants argue that existing experiments have not conclusively ruled out local realism due to various loopholes, such as the detection and locality loopholes. The Bell theorem is debated, with some asserting it demonstrates incompatibility between quantum mechanics and local hidden variable theories, while others claim it does not definitively negate local realism. References to peer-reviewed papers are made to support claims, but there is contention over the interpretation of these findings. Overall, the conversation highlights ongoing disagreements in the physics community regarding the implications of quantum entanglement and the measurement problem on local realism.
  • #121
akhmeteli said:
If I am not mistaken, you mentioned recently that Bohm's theory is superdeterministic. That seems reasonable. Furthermore, maybe unitary evolution is also, strictly speaking, superdeterministic. Indeed, it can include all observers and instruments, at least in principle. So my question is: what does this mean for the nonlocality of Bohm's theory?
Bohmian mechanics is both superdeterministic and nonlocal. It should not be surprising, because Bohmian mechanics uses the wave function, and the wave function is a nonlocal and deterministic object.
 
  • #122
Demystifier said:
Bohmian mechanics is both superdeterministic and nonlocal. It should not be surprising, because Bohmian mechanics uses the wave function, and the wave function is a nonlocal and deterministic object.

I have not given much thought to superdeterminism, so please forgive me if the following question is downright stupid.

My understanding is that superdeterminism rejects free will. So it looks like, from the point of view of Bohmian mechanics, no possible results of Bell tests can eliminate local realism, because there is no free will anyway? I know that, Bohmian mechanics or not, the "superdeterminism loophole" cannot be eliminated in Bell tests, but superdeterminism is typically considered a pretty extreme notion, and now it turns out it is alive and kicking in such a relatively established approach as Bohmian mechanics?
 
  • #123
As I understand it, superdeterminism alone is not enough to create a loophole in Bell tests. In addition to superdeterminism, we also need an evil Nature that positioned BM particles in advance in a very special way, to trick the scientists and laugh at them.

In some sense that loophole is like the 'Boltzmann brain', which also cannot be ruled out. BTW, the 'Boltzmann brain' argument can be used even to deny QM as a whole: the world is just Newtonian, but the 'Boltzmann brain' has memories that QM was discovered and experimentally verified.
 
  • #124
Demystifier said:
Bohmian mechanics is both superdeterministic and nonlocal. It should not be surprising, because Bohmian mechanics uses the wave function, and the wave function is a nonlocal and deterministic object.
That's a useful observation. It's obvious, as you say, if you think of it. Thanks.

Do you have a view of how this meshes with arguments about free will, or do you think the issue of free will is overblown?
 
  • #126
akhmeteli said:
My understanding is that superdeterminism rejects free will.
True.

akhmeteli said:
So it looks like, from the point of view of Bohmian mechanics, no possible results of Bell tests can eliminate local realism, because there is no free will anyway?
Wrong. Bohmian mechanics is, by definition, a theory of nonlocal realism, so anything which assumes Bohmian mechanics eliminates local realism.

akhmeteli said:
I know that, Bohmian mechanics or not, the "superdeterminism loophole" cannot be eliminated in Bell tests, but superdeterminism is typically considered a pretty extreme notion, and now it turns out it is alive and kicking in such a relatively established approach as Bohmian mechanics?
Superdeterminism by itself is not extreme at all. After all, classical mechanics is also superdeterministic. What is extreme is the idea that superdeterminism may eliminate nonlocality in QM. Namely, superdeterminism alone is not sufficient to eliminate nonlocality. Instead, to eliminate nonlocality, superdeterminism must be combined with a VERY SPECIAL CHOICE OF INITIAL CONDITIONS (see also the post of Dmitry67 above). It is such special conspiratorial initial conditions that are considered extreme.
 
  • #127
Demystifier said:
In my opinion, free will is only an illusion. See the attachment in
https://www.physicsforums.com/showpost.php?p=2455753&postcount=109
Fair enough, given the just-hedged-enough nature of "you think that you have free will. But it may only be an illusion". For me, I'm not willing to make strong claims on something that appears not to be so easily looked at experimentally, but OK, if we have the hedge.
 
  • #128
Peter Morgan said:
Fair enough, given the just-hedged-enough nature of "you think that you have free will. But it may only be an illusion". For me, I'm not willing to make strong claims on something that appears not to be so easily looked at experimentally, but OK, if we have the hedge.
I'm glad to see that we (you and me) think similarly.
 
  • #129
Demystifier said:
After all, classical mechanics is also superdeterministic.
Right.
What is extreme is the idea that superdeterminism may eliminate nonlocality in QM. Namely, superdeterminism alone is not sufficient to eliminate nonlocality. Instead, to eliminate nonlocality, superdeterminism must be combined with a VERY SPECIAL CHOICE OF INITIAL CONDITIONS (see also the post of Dmitry67 above). It is such special conspiratorial initial conditions that are considered extreme.
The "very special"ness is only that, given that the state of the whole experimental apparatus at the times that simultaneous events were recorded, together with the instrument settings at the time, were what they were, the state of the whole experimental apparatus and its whole past light cone at some point in the past must have been consistent with the state that we observed. From a classical deterministic dynamics point of view, this is only to say that the initial conditions now determine the initial conditions at past times (and at future times).

A thermodynamic or statistical mechanical point of view of what the state is, however, places a less stringent requirement that the thermodynamic or statistical mechanical state in the past must have been consistent with the recorded measurements that we make now. An experiment that violates Bell-CHSH inequalities makes a record, typically, of a few million events that are identified as "pairs", which is not a very tight constraint on what the state of the universe was in the backward light-cone a year ago. A probabilistic dynamics, such as that of QM, only claims that the statistics that are observed now on various ensembles of data constrain what the statistics in the past would have been if we had measured them. This kind of move to probabilistic dynamics is as open to classical modeling in space-time as it is to QM, in which we make the superdeterminism apply only to probability distributions instead of to deterministic states. To some extent this move suggests giving up particle trajectories, but of course trajectories can be added that are consistent with the probabilistic dynamics of QM, in several ways, at least including deBB, Nelson, and SED (insofar as the trajectories that we choose to add are beyond being looked at by experiment, however, we should perhaps be metaphysically rather noncommittal).
 
  • #130
From an interview with Anton Zeilinger:

I'd like to come back to these freedoms. First, if you assumed there were no freedom of the will – and there are said to be people who take this position – then you could do away with all the craziness of quantum mechanics in one go.

True – but only if you assume a completely determined world where everything that happened, absolutely everything, were fixed in a vast network of cause and effect. Then sometime in the past there would be an event that determined both my choice of the measuring instrument and the particle's behaviour. Then my choice would no longer be a choice, the random accident would be no accident and the action at a distance would not be action at a distance.

Could you get used to such an idea?

I can't rule out that the world is in fact like that. But for me the freedom to ask questions to nature is one of the most essential achievements of natural science. It's a discovery of the Renaissance. For the philosophers and theologians of the time, it must have seemed incredibly presumptuous that people suddenly started carrying out experiments and asking questions of nature and deducing laws of nature, which are in fact the business of God. For me every experiment stands or falls with the fact that I'm free to ask the questions and carry out the measurements I want. If that were all determined, then the laws of nature would only appear to be laws, and the entire natural sciences would collapse.

http://print.signandsight.com/features/614.html
 
  • #131
Hi Nikman, but note that Zeilinger has limited the discussion to thinking it has to be "complete" determinism. As he says, he can't rule complete determinism out, but he doesn't like it, he'd rather do something else. Fair enough.

I'm curious what you think, Zeilinger being not here, in the face of a suggestion that we take the state to be either thermodynamic or statistical mechanical (i.e. a deterministic evolution of probability distributions, without necessarily introducing deterministic trajectories). Part of the suggestion here is to emulate, in a classical setting, the relative lack of metaphysical commitment of, say, the Copenhagen interpretation of QM to anything that we do not record as part of an experiment, which to me particularly includes trajectories.
 
  • #132
Demystifier said:
Superdeterminism by itself is not extreme at all. After all, classical mechanics is also superdeterministic. What is extreme is the idea that superdeterminism may eliminate nonlocality in QM. Namely, superdeterminism alone is not sufficient to eliminate nonlocality. Instead, to eliminate nonlocality, superdeterminism must be combined with a VERY SPECIAL CHOICE OF INITIAL CONDITIONS (see also the post of Dmitry67 above). It is such special conspiratorial initial conditions that are considered extreme.

I don't think it is completely fair to say that classical mechanics is also superdeterministic, because I do not believe that is the case. If determinism were the same thing as superdeterminism, we would not need a special name for it. So I agree completely with your "extreme" initial conditions requirement at a minimum.

But I also question whether [classical mechanics] + [extreme initial conditions] can ever deliver superdeterminism. In a true superdeterministic theory, you would have an explicit description of the mechanism by which the *grand* conspiracy occurs (the conspiracy to violate Bell inequalities). For example: we could connect Alice's detector setting to a switch controlled by the timing of decays of a radioactive sample. So that is now part of the conspiracy too, and the instructions for when to click or not must be present in that sample (and therefore presumably everywhere). Were that true, why can't we see it before we run the experiment?

As I have said many times: if you allow the superdeterminism "loophole" as a hedge for Bell inequalities, you essentially allow it as a hedge for all physical laws. Which sort of takes the meaning away from it (as a hedge) in the first place.

[I probably shouldn't have even written this post, so my apologies in advance. I consider it akin to false histories (the Omphalos hypothesis) - ad hoc and unfalsifiable.]
 
  • #133
nikman said:
From an interview with Anton Zeilinger:

... If that were all determined, then the laws of nature would only appear to be laws, and the entire natural sciences would collapse.

http://print.signandsight.com/features/614.html

Thanks for the link! I think his quote says a lot.
 
  • #134
DrChinese said:
But I also question whether [classical mechanics] + [extreme initial conditions] can ever deliver superdeterminism. In a true superdeterministic theory, you would have an explicit description of the mechanism by which the *grand* conspiracy occurs (the conspiracy to violate Bell inequalities).
Part of the conspiracy, at least, comes from the experimenter. One of a specific symmetry class of experimental apparatuses has to be constructed, typically over months, insofar as it used not to be easy to violate Bell inequalities. The material physics that allows us to construct the requisite correlations between measurement results is arguably pretty weird.

Furthermore, the standard way of modeling Bell inequality violating experiments in QM is to introduce projection operators to polarization states of a single frequency mode of light, which are non-local operators. [A propos of which, DrC, do you know of a derivation that is truly careful about the field-theoretic locality?] The QM model, in other words, is essentially a description of steady state, time-independent statistics that has specific symmetry properties. Since I take violation of Bell inequalities to be more about contextuality than about nonlocality, which specifically is implemented by post-selection of a number of sub-ensembles according to what measurement settings were in fact chosen, this seems natural to me, but I wonder what you think?

Remember that with me you have to make a different argument than you might make with someone who thinks the measurement results are noncontextually determined by the state of each of two particles, since for me whether measurement events occur is determined jointly by the measurement devices and the field they are embedded in.
DrChinese said:
For example: we could connect Alice's detector setting to a switch controlled by the timing of decays of a radioactive sample. So that is now part of the conspiracy too, and the instructions for when to click or not must be present in that sample (and therefore presumably everywhere). Were that true, why can't we see it before we run the experiment?
I do wonder, but apparently that's how the statistics pile up. We have a choice of whether to just say, with Copenhagen, that we can say nothing at all about anything that is not macroscopic, or to consider what properties different types of models have to have in order to "explain" the results. A particle Physicist tells a causal story about what happens in experiments, using particles, anti-particles, and ghost and virtual particles, with various prevarications about what is really meant when one talks about such things (which is typically nonlocal if anything like Wigner's definition of a particle is mentioned, almost inevitably); so it seems reasonable to consider what prevarications there have to be in other kinds of models. It's good that we know moderately well what prevarications we have to introduce in the case of deBB, and that they involve a nonlocal trajectory dynamics in that case.
DrChinese said:
As I have said many times: if you allow the superdeterminism "loophole" as a hedge for Bell inequalities, you essentially allow it as a hedge for all physical laws. Which sort of takes the meaning away from it (as a hedge) in the first place.
This might be true, I guess, although proving that superdeterminism is a hedge for all possible physical laws looks like tough mathematics to me. Is the same perhaps true for backward causation? Do you think it's an acceptable response to ask what constraints have to be put on superdeterminism (or backward causation) to make it give less away?
DrChinese said:
[I probably shouldn't have even written this post, so my apologies in advance. I consider it akin to false histories (the Omphalos hypothesis) - ad hoc and unfalsifiable.]
You're always welcome with me, DrC. I'm very pleased with your comments in this case. If you're ever in CT, look me up.
I like the Omphalos. Is it related to the heffalump?

Slightly after the above, I'm particularly struck by your emphasis on the degree of correlation required in the initial conditions to obtain the experimental results we see. Isn't the degree of correlation required in the past precisely the same as the degree of correlation that we note in the records of the experimental data? It's true that the correlations cannot be observed in the past without measurement of the initial state in outrageous detail across the whole of a time-slice of the past light-cone of a measurement event, insofar as there is any degree of dynamical chaos, but that doesn't take away from the fact that in a fine-grained enough description there is no change of entropy. [That last phrase is a bit cryptic, perhaps, but it takes my fancy a little. Measurements now are the same constraint on the state in the past as they are on the state now. Since they are actually observed constraints now, it presumably cannot be denied that they are constraints on the state now. If the actual experimental results look a little weird as constraints that one might invent now, then presumably they look exactly as weird as constraints on the state 10 years ago, no more and no less. As observed constraints, they are constraints on what models have to be like to be empirically adequate.] I'm worried that all this repetition is going to look somewhat blowhard, as it does a little to me now, so I'd be glad if you can tell me if you can see any content in it.
 
  • #135
Peter Morgan said:
Hi Nikman, but note that Zeilinger has limited the discussion to thinking it has to be "complete" determinism. As he says, he can't rule complete determinism out, but he doesn't like it, he'd rather do something else. Fair enough.

I made the mistake of claiming in a post some while back that the Zeilinger group's Leggett paper needs editing (for English clarity) because in its conclusion it seemed to suggest that the authors didn't foreclose even on superdeterminism (or something more or less equivalent). Well, I was wrong; they don't foreclose on it, as AZ makes clear here. He simply finds such a world unimaginable.

I'm curious what you think, Zeilinger being not here, in the face of a suggestion that we take the state to be either thermodynamic or statistical mechanical (i.e. a deterministic evolution of probability distributions, without necessarily introducing deterministic trajectories). Part of the suggestion here is to emulate, in a classical setting, the relative lack of metaphysical commitment of, say, the Copenhagen interpretation of QM to anything that we do not record as part of an experiment, which to me particularly includes trajectories.

I'm far more abashed than flattered at being considered an acceptable stand-in to speak for this astonishing, brilliant man. For gosh sakes I'm not even a physicist; I'm at best an 'umble physics groupie.

In this dilettante capacity I'm not aware that he's ever gone as far as (say) Mermin (in the Ithaca Interpretation) and suggested that everything's correlations, dear boy, correlations. What does Bruknerian coarse-grainedness as complementary to decoherence tell us? This is really in part about what macrorealism means, isn't it? Does the GHZ Emptiness of Paths Not Taken have any relevance here?

My understanding via Hans C. von Baeyer is that Brukner and Zeilinger have plotted state evolution in "information space" (in terms of classical mechanics, equivalent to trajectories of billiard balls perhaps) and then translated that into Hilbert space where the math reveals itself to be the Schrödinger equation. How truly deterministic is the SE? My mental clutch is starting to slip now.
 
  • #136
Maaneli said:
I disagree. You replied to someone's suggestion that locality is worth sacrificing for realism with the claim that Leggett's work shows that even "realism" (no qualifications given about contextuality or non-contextuality) is not tenable without sacrificing another intuitively plausible assumption. But that characterization of Leggett's work is simply not accurate, which anyone can see by reading those abstracts you linked to. And I don't even think it's true that everyone in this field agrees that the word realism is used to imply classical realism, and that this is done without any confusion. I know several active researchers in this field who would dispute the validity of your use of terminology. Moreover, the link you gave to try and support your claim doesn't actually do that. If you read your own link, you'll see that everything Aspelmeyer and Zeilinger conclude about realism from their experiment is qualified in the final paragraph:

However, Alain Aspect, a physicist who performed the first Bell-type experiment in the 1980s, thinks the team's philosophical conclusions are subjective. "There are other types of non-local models that are not addressed by either Leggett's inequalities or the experiment," he said.

So Aspect is clearly indicating that Aspelmeyer and Zeilinger's use of the word "realism" is intended in a broader sense than Leggett's use of the term "classical realism".



It's not nitpicking on semantics; it's getting the physics straight. If that's too difficult for you to do, then I'm sorry, but maybe you're just not cut out for this thread.


I agree.
Reality is independence of observers.
 
  • #137
Peter Morgan said:
Part of the conspiracy, at least, comes from the experimenter. ... I'd be glad if you can tell me if you can see any content in it.

We have a lot of jackalopes in Texas, but few heffalumps.

---------------------------------

The issue is this: Bell sets limits on local realistic theories. So there may be several potential "escape" mechanisms. One is non-locality; the Bohmian approach attempts to explicitly describe the mechanism by which Bell violations can occur. Detailed analysis appears to provide answers as to how this could match observation. BM can be explicitly critiqued, and answers can be provided to those critiques.
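To make that limit concrete, here is a minimal numerical sketch (an editorial illustration, not a model anyone in this thread has proposed). It compares the CHSH statistic S for a factorizable local hidden-variable model against the quantum prediction for polarization-entangled photons; the sign-based outcome rule and the angle settings are illustrative assumptions.

```python
# Minimal sketch: CHSH statistic S for (a) a factorizable local hidden-variable
# (LHV) model and (b) the quantum prediction for polarization-entangled photons.
# Any model of the factorizable LHV form satisfies |S| <= 2; QM reaches 2*sqrt(2).
import numpy as np

rng = np.random.default_rng(0)

def lhv_correlation(a, b, n=200_000):
    # Shared hidden variable: a polarization angle fixed at the source.
    lam = rng.uniform(0.0, np.pi, n)
    # Deterministic +/-1 outcomes, each depending only on the local setting and lam.
    A = np.sign(np.cos(2.0 * (a - lam)))
    B = np.sign(np.cos(2.0 * (b - lam)))
    return float(np.mean(A * B))

def qm_correlation(a, b):
    # Quantum prediction for photons in the |Phi+> polarization state.
    return np.cos(2.0 * (a - b))

# Settings (radians) that maximize the quantum CHSH value.
a0, a1, b0, b1 = 0.0, np.pi / 4, np.pi / 8, 3 * np.pi / 8

def chsh(E):
    return E(a0, b0) - E(a0, b1) + E(a1, b0) + E(a1, b1)

print("LHV S =", chsh(lhv_correlation))  # ~= 2.0, the Bell bound
print("QM  S =", chsh(qm_correlation))   # = 2*sqrt(2) ~= 2.83
```

Whatever bias function a superdeterministic account supplies, it has to lift the first number to the second while leaving each local marginal looking random.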

Another is the "superdeterminism" approach. Under this concept, the initial conditions are just such that all experiments which are done will always show Bell violations. However, like the "fair sampling" loophole, the idea is that from the full universe of possible observations - those which are counterfactual - the true rate of coincidence does NOT violate a Bell Inequality. So there is a bias function at work. That bias function distorts the true results because the experimenter's free will is compromised. The experimenter can only select to perform measurements which support QM due to the experimenter's (naive and ignorant) bias.

Now, without regard to the reasonableness of that argument, I point out the following cases, in which the results are identical.

a) The experimenter's detector settings are held constant for a week at a time.
b) The settings are changed at the discretion of the experimenter, at any interval.
c) The settings are changed due to clicks from a radioactive sample, per an automated system, over which the experimenter has no direct control.
d) A new hypothesis, that the experiments actually show that a Bell Inequality is NOT violated, but the data recording device is modified coincidentally to show results indicating that the Bell Inequality was violated.

In other words, we know we won't see any difference in a), b) and c). And if d) occurred, it would be a different form of "superdeterminism". So the question I am asking: does superdeterminism need to obey any rules? Does it need to be consistent? Does it need to be falsifiable? Because clearly, the a) case above should be enough to rule out superdeterminism (at least in my mind - the experimenter is exercising no ongoing choice past an initial point). The c) case requires that superdeterminism flow from one force to another, even though the standard model shows no such mechanism (there is no known connection between an experimental optical setting and the timing of radioactive decay). And the d) case shows that there is always one more avenue by which we can float an ad hoc hypothesis.

So you ask: is superdeterminism a hedge for all physical laws? If you allow the above, one might then turn around and say: does it not apply to other physical laws equally? Because my answer is that if so, perhaps relativity is not a true effect - it is simply a manifestation of superdeterminism. All of those GPS satellites... they suffer from the idea that the experimenter is not free to request GPS information freely. So while results appear to follow GR, they really do not. How is this less scientific than the superdeterminism "loophole" as applied to Bell?

In other words, there is no rigorous form of superdeterminism to critique at this point past an ad hoc hypothesis. And we can formulate ad hoc hypotheses about any physical law. None of which will ever have any predictive utility. So I say it is not science in the conventional sense.

-----------------------

You mention contextuality and the subsamples (events actually recorded). And you also mention the "degree of correlation required in the initial conditions to obtain the experimental results we see". The issue I return to time after time: the bias function - the delta between the "true" universe and the observed subsample correlation rates - must itself be a function of the context. But it is sometimes negative and sometimes positive, which seems unreasonable to me, considering that the context depends ONLY on the relative angle difference and nothing else.

So we need a bias function that eliminates all other variables except the difference between measurement settings at a specific point in time. It must apply to entangled light, which will also show perfect correlations. But it must NOT apply to unentangled light (as you know, that is my criticism of the De Raedt model). And it must further return apparently random values in all cases. I believe these are all valid requirements of a superdeterministic model. As well as locality and realism, of course.
 
  • #138
Continued from above...

So what I am saying is: when you put together all of the requirements, I don't think you have anything that works remaining. You just get arguments that are no better than "last Thursdayism".

------------------------------

By the way, wouldn't GHZ falsify superdeterminism too? After all, there is no subsample.

Or would one make the argument that the experimenter had no free will as to the choice of what to measure? (That seems a stretch, since all observations yield results inconsistent with local realism - at least within experimental limits).
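For reference, the GHZ argument being alluded to here is compact enough to state (standard textbook form, added as an editorial aside). The three-particle state

$$|\mathrm{GHZ}\rangle = \tfrac{1}{\sqrt{2}}\big(|000\rangle + |111\rangle\big)$$

is a simultaneous eigenstate of the measurement products $\sigma_x\sigma_y\sigma_y$, $\sigma_y\sigma_x\sigma_y$, $\sigma_y\sigma_y\sigma_x$ (eigenvalue $-1$ each) and $\sigma_x\sigma_x\sigma_x$ (eigenvalue $+1$). If each particle carried pre-existing values $x_i, y_i = \pm 1$, multiplying the first three products would give $x_1 x_2 x_3\,(y_1 y_2 y_3)^2 = x_1 x_2 x_3 = (-1)^3 = -1$, contradicting the quantum value $+1$ on every single run - no inequality and no statistical subsample involved.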
 
  • #139
Demystifier said:
True. ... It is such special conspiratorial initial conditions that are considered extreme.
Thank you very much for the explanations
 
  • #140
DrChinese said:
I don't think it is completely fair to say that classical mechanics is also superdeterministic, because I do not believe that is the case. If determinism were the same thing as superdeterminism, we would not need a special name for it. So I agree completely with your "extreme" initial conditions requirement at a minimum.
I see what you mean, but note that I use a different DEFINITION of the term "superdeterminism". In my language, superdeterminism is nothing but determinism applied to everything. Thus, a classical deterministic model of the world is superdeterministic if one assumes that, according to this model, everything that exists is described by the classical laws of physics. In my language, superdeterminism does not imply the absence of specific laws, such as Newton's law of gravitation.

Even with this definition of superdeterminism, it is not exactly the same as determinism. For example, if you believe that the classical laws of physics are valid everywhere except in the brain in which a genuine spiritual free will also acts on electric currents in the brain, then, according to my definition, such a view is deterministic but not superdeterministic.
 
  • #141
akhmeteli said:
I am awfully sorry, I've read your post several times, but I just cannot understand a word.
Ok, I'll try to present the gist of how I've learned to think about this in a less scattered way.

1. Bell locality can be parsed to include statistical independence between A and B.

2. Statistical dependence between A and B is sufficient to cause experimental violation of inequalities which are based on the (formal) assumption of statistical independence between A and B.

3. The statistical dependence is produced via local channels.

4. So, experimental violation of inequalities based on Bell locality doesn't imply nonlocality.

5. Formally, Bell locality entails that the joint probability of the entangled state be factorable into the product of the individual probabilities for A and B (written out explicitly after this list).

6. Bell locality is incompatible with the QM requirement that the entangled state representation be nonfactorable.

7. This nonfactorability or quantum nonseparability reflects the (locally produced) statistical dependencies required for the experimental production of entanglement.

8. Experimental loopholes notwithstanding, no Bell local theory can possibly reproduce the full range of QM predictions or experimental results wrt entangled states.

9. None of this implies the existence of nonlocality in Nature -- which is contrary to your idea that, in your words:
akhmeteli said:
Yes, that would certainly be a good evidence of nonlocality (I mean if violations of the genuine Bell inequalities, without loopholes, are demonstrated experimentally).


10. None of this implies that SQM (associated with Bell's theorem) is a nonlocal theory -- which is contrary to your idea that, in your words:
akhmeteli said:
To get such nonlocality in the Bell theorem you need something extra - such as the projection postulate. And this postulate generates nonlocality in a very direct way: indeed, according to this postulate, as soon as you measure a projection of spin of one particle of a singlet, the value of the projection of spin of the other particle immediately becomes determined, no matter how far from each other the particles are, and this is what the Bell theorem is about.


11. In fact, the standard QM methodology and account (including the projection postulate and any quantum level models associated with a particular experimental setup) is based on the (at least tacit, but explicit in the case of some models) assumption that there's a locally produced relationship between quantum disturbances analyzed at spacelike separations. (eg., in the case of Aspect et al experiments using atomic calcium cascades to produce entangled photons, the entangling relationship is assumed to be produced at emission -- and the experimental design must entail statistical dependence between A and B in order to pair photons emitted by the same atom).
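For concreteness (standard notation, added for reference), the factorability condition in point 5 and the inequality it implies are usually written as

$$P(A,B\mid a,b,\lambda) = P(A\mid a,\lambda)\,P(B\mid b,\lambda), \qquad E(a,b) = \int d\lambda\,\rho(\lambda)\,A(a,\lambda)\,B(b,\lambda),$$

$$\big|E(a,b) - E(a,b') + E(a',b) + E(a',b')\big| \le 2 \quad \text{(CHSH)},$$

whereas QM predicts values up to $2\sqrt{2}$ for suitably chosen settings.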
 
  • #142
ThomasT said:
Ok, I'll try to present the gist of how I've learned to think about this in a less scattered way.

Thank you very much for your patience with me. At least now I don't feel as if I were trying to decipher a text in double-Dutch:-)

ThomasT said:
3. The statistical dependence is produced via local channels.

What local channels, if there is enough spatial separation?

ThomasT said:
8. Experimental loopholes notwithstanding, no Bell local theory can possibly reproduce the full range of QM predictions or experimental results wrt entangled states.

Again, the fact that local theories cannot reproduce all QM predictions (which include contradictions) cannot be used as an argument against local theories - it's their strong point.




ThomasT said:
11. In fact, the standard QM methodology and account (including the projection postulate and any quantum level models associated with a particular experimental setup) is based on the (at least tacit, but explicit in the case of some models) assumption that there's a locally produced relationship between quantum disturbances analyzed at spacelike separations. (eg., in the case of Aspect et al experiments using atomic calcium cascades to produce entangled photons, the entangling relationship is assumed to be produced at emission -- and the experimental design must entail statistical dependence between A and B in order to pair photons emitted by the same atom).

"the entangling relationship assumed to be produced at emission" is one thing, but the choice of projection of spin or polarization measured at A seems to immediately change the situation at B. If it were indeed so, that would be a problem for locality. At least that's what I tend to think.
 
  • #143
ThomasT said:
9. None of this implies the existence of nonlocality in Nature ...


11. In fact, the standard QM methodology and account (including the projection postulate and any quantum level models associated with a particular experimental setup) is based on the (at least tacit, but explicit in the case of some models) assumption that there's a locally produced relationship between quantum disturbances analyzed at spacelike separations. (eg., in the case of Aspect et al experiments using atomic calcium cascades to produce entangled photons, the entangling relationship is assumed to be produced at emission -- and the experimental design must entail statistical dependence between A and B in order to pair photons emitted by the same atom).

A comment just to make sure everyone is up on some of the refinements to the original Bell test regimen.

We now have the ability to entangle photons that have never met - this is called "entanglement swapping" (ES). Early versions of this protocol did not allow the photons to be created sufficiently far apart to eliminate local interaction, but the newer ones do. For example:

High-fidelity entanglement swapping with fully independent sources
(2009) Rainer Kaltenbaek, Robert Prevedel, Markus Aspelmeyer, Anton Zeilinger

"Entanglement swapping allows to establish entanglement between independent particles that never interacted nor share any common past. This feature makes it an integral constituent of quantum repeaters. Here, we demonstrate entanglement swapping with time-synchronized independent sources with a fi delity high enough to violate a Clauser-Horne-Shimony-Holt inequality by more than four standard deviations. The fact that both entangled pairs are created by fully independent, only electronically connected sources ensures that this technique is suitable for future long-distance quantum communication experiments as well as for novel tests on the foundations of quantum physics."

Note that the experiment in this paper does not actually execute the variation where the photons are never in each other's light cones, but you can be sure that is coming (if not already published).

So basically, you have a pretty difficult time explaining the violation of a Bell inequality by photon pairs that were never in a common light cone without something being non-local.
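For readers unfamiliar with the protocol, the algebra behind entanglement swapping is short enough to quote (a standard identity, added for reference). Writing $|\Phi^\pm\rangle, |\Psi^\pm\rangle$ for the four Bell states,

$$|\Phi^+\rangle_{12}\otimes|\Phi^+\rangle_{34} = \tfrac{1}{2}\Big(|\Phi^+\rangle_{14}|\Phi^+\rangle_{23} + |\Phi^-\rangle_{14}|\Phi^-\rangle_{23} + |\Psi^+\rangle_{14}|\Psi^+\rangle_{23} + |\Psi^-\rangle_{14}|\Psi^-\rangle_{23}\Big),$$

so a Bell-state measurement on photons 2 and 3 leaves photons 1 and 4, which never interacted, in the corresponding entangled state.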
 
  • #144
akhmeteli said:
What local channels, if there is enough spatial separation?
Statistical dependence refers to the fact that a detection at A changes the sample space at B, and vice versa.

This happens during the pairing process via the coincidence circuitry.

All very local, but sufficient to render Bell locality incompatible with QM and entanglement experiments.

akhmeteli said:
Again, the fact that local theories cannot reproduce all QM predictions (which include contradictions) cannot be used as an argument against local theories - it's their strong point.
But QM predictions agree with experimental results, and Bell local theories don't. More importantly, Bell local theories can't possibly agree with experimental results ... ever -- because Bell's formal expression of locality encodes statistical as well as causal independence.

Bell locality contradicts an integral part of entanglement experiments, statistical dependence between A and B. The upside, for LHV advocates, is that this doesn't rule out local realist theories -- just Bell local theories. The downside, for nonlocality advocates, is that this tells us nothing about nonlocality wrt either Nature or standard QM.

akhmeteli said:
... the choice of projection of spin or polarization measured at A seems to immediately change the situation at B. If it were indeed so, that would be a problem for locality.
Yes, that would be a problem for locality. But that's not what standard QM says, and that's not what happens experimentally.
 
  • #145
ThomasT said:
Statistical dependence refers to the fact that a detection at A changes the sample space at B, and vice versa. ... Yes, that would be a problem for locality. But that's not what QM says, and that's not what happens experimentally.

Sorry, ThomasT, you've lost me again. This time I cannot say I don't understand a word, but 30% is too little for a meaningful discussion - this is a physics forum, not a crossword contest. With all due respect, if you believe you're saying something well-known that I don't know, give me a reference, if not, try to be clearer. And I mean much clearer.
 
  • #146
akhmeteli said:
"the entangling relationship assumed to be produced at emission" is one thing, but the choice of projection of spin or polarization measured at A seems to immediately change the situation at B. If it were indeed so, that would be a problem for locality. At least that's what I tend to think.



long time ago...

...Quantum mechanics says that there should be a high correlation between results at the polarizers because the photons instantaneously "decide" together which polarization to assume at the moment of measurement, even though they are separated in space. Hidden variables, however, says that such instantaneous decisions are not necessary, because the same strong correlation could be achieved if the photons were somehow informed of the orientation of the polarizers beforehand...

...Quantum mechanics predicts that "non-local" correlations can exist between the particles. This means that if one photon is polarized in, say, the vertical direction, the other will always be polarized in the horizontal direction, no matter how far away it is. However, some physicists argue that this cannot be true and that quantum particles must have local values – known as "hidden variables" – that we cannot measure...
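The perfect correlation described in the second passage has a simple quantitative form (a standard result, added for reference, assuming the anticorrelated polarization state the passage describes): for photons in $|\Psi^-\rangle = (|HV\rangle - |VH\rangle)/\sqrt{2}$, the probability that both photons pass polarizers set at angles $\alpha$ and $\beta$ is

$$P_{++}(\alpha,\beta) = \tfrac{1}{2}\sin^2(\alpha - \beta),$$

which vanishes for parallel polarizers no matter how far apart they are. The hidden-variables question in the first passage is whether a model with pre-assigned local values can reproduce this dependence on $\alpha - \beta$ alone; Bell's theorem says it cannot while satisfying his factorizability condition.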
 
  • #147
ThomasT said:
But QM predictions agree with experimental results, and Bell local theories don't. More importantly, Bell local theories can't possibly agree with experimental results ... ever -- because Bell's formal expression of locality encodes statistical as well as causal independence.

...

Yes, that would be a problem for locality. But that's not what standard QM says, and that's not what happens experimentally.

These are not standard expressions of theory or experiment. Experimentally: when Alice acts, it appears "as if" the situation changes non-locally for Bob (and vice versa). Theoretically: a Bell local theory is one in which Alice's action does not appear "as if" the situation changes at Bob to match UNLESS there is a sub-c channel for propagation (or possibly a common earlier cause within a mutual light cone).
 
  • #148
akhmeteli said:
Sorry, ThomasT, you've lost me again. This time I cannot say I don't understand a word, but 30% is too little for a meaningful discussion - this is a physics forum, not a crossword contest. With all due respect, if you believe you're saying something well-known that I don't know, give me a reference, if not, try to be clearer. And I mean much clearer.
I don't know if it's a well known approach or not.

The argument is that Bell's locality condition isn't, exclusively, a locality condition. If it isn't, then what might this entail wrt the interpretation of experimental violations of inequalities based on Bell locality?

In a nutshell:

Bell locality doesn't just represent causal independence between A and B, but also statistical independence between A and B.

Statistical dependence between A and B means that a detection at A changes the sample space at B, and vice versa. The pairing process entails statistical dependence between A and B, and this statistical dependence can be accounted for via the local transmissions and interactions of the coincidence circuitry.

Statistical dependence between A and B is sufficient to violate inequalities based on Bell locality.

So, experimental violations of inequalities based on Bell locality, while they do rule out Bell local theories, don't imply nonlocality or necessarily rule out local realism.
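One standard way to make ThomasT's distinction precise (the Jarrett/Shimony decomposition, added here for reference) is to note that Bell's factorizability is the conjunction of two separate conditions:

$$\text{parameter independence:}\quad P(A\mid a,b,\lambda) = P(A\mid a,\lambda),$$

$$\text{outcome independence:}\quad P(A\mid a,b,B,\lambda) = P(A\mid a,b,\lambda).$$

Together they give $P(A,B\mid a,b,\lambda) = P(A\mid a,\lambda)\,P(B\mid b,\lambda)$, so an experimental violation shows that at least one of the two fails, without by itself saying which.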
 
  • #149
DrChinese said:
These are not standard expressions of theory or experiment.
If not, they should be.

DrChinese said:
Experimentally: when Alice acts, it appears "as if" the situation changes non-locally for Bob (and vice versa).
This isn't the way that I've learned to think about it.

DrChinese said:
Theoretically: a Bell local theory is one in which Alice's action does not appear "as if" the situation changes at Bob to match UNLESS there is a sub-c channel for propagation (or possibly a common earlier cause within a mutual light cone).
I'm not sure what you're saying. Bell local theories of entangled states don't match QM or experiments, do they?
 
  • #150
ThomasT said:
I'm not sure what you're saying. Bell local theories of entangled states don't match QM or experiments, do they?

Bell local + Bell realistic = ruled out.
 
