Local realism ruled out? (was: Photon entanglement and )

  • #101
Demystifier said:
I still don't understand your logic. So I'll start with a question. Do you agree that theory and experiment favor nonlocality? (I'm not asking if they definitely prove it, because they don't. I'm only asking if they favor it.)

I will try to answer your question in the evening (Central time zone:-) )

Demystifier said:
If you mean your idea that a single charged particle guided by the wave function can be viewed as being guided by the electromagnetic potential (which is an interesting idea)

Thank you very much, I highly value your opinion.

Demystifier said:
, then it has nothing to do with locality and nonlocality. To say anything about nonlocality, you must consider a system of at least two entangled particles.

I agree, but Maaneli was right - I was not discussing my research, and I did have in mind my post #74 in this thread and the reference there.
 
  • #102
RUTA said:
It's possible to have a distant object be brighter than a closer object, e.g., the Sun is much brighter than this computer screen. Likewise the angle subtended by an object doesn't discriminate relative spatial distance. How do you envision relating distance and interaction? And, how do you see your approach giving a Lorentz invariant result, since it can't give a definite spatial separation and be Lorentz invariant?

It is the light that we feel, and that light is then by definition "close". Chains of propagating effects are the meter sticks (and the clocks) of our universe.

Then again, the sun IS close in a frame near that of the propagating light; that is to say, the events of emission and absorption are at nearly zero distance, given the single-photon carrier of the propagating effect.

The sun is also intimately close on the scale of the other stars in the universe. But we can also see that on our scale it is big by how it affects so many other systems near us; the light reflecting off the moon and the planets, and their very orbits, tell us that the sun is both big and (relatively) near. The (also relative) distance of the sun is then, to an extent, the ratio of its effect on us to the scale of its effects on things near and far from us. This, I think, is quantifiable at least to the point of an ordering, which gives us topological structure.

What, after all, is a measuring rod but a rigid solid, i.e., a condensate of strongly coupled component atoms? Lengths are essentially measured by counting blocks of those atoms, and thus the number of interactional links between the ends of the rod.

As we refine our description of interacting phenomena, however, we (lately) replace the rigid measuring rod with light signals and clocks.

What, then, is a clock but a series of "tick" events, each causing the next and being caused by the previous?

The null space-time distance between emission and absorption events points to that as the elementary unit of measurement, the (by definition) invariant phenomenon by which all others are given relative scale.

In formulating any operationally meaningful definitions in the context of science, we start with the primaries of observation, i.e., causally interacting with one's environment. It is sensible, then, that all other concepts, including metric distance and time, are derivative of causal connection. The mystery to be solved is rather the extent to which mutually interacting systems either accidentally or necessarily resolve themselves into the space-time-field structure we are able to perceive and map with our theories. In doing that, I think causality is necessarily local, in that localization is necessarily defined causally.

I cannot help but think rejecting local causality in order to preserve a notion of objective reality is backwards.

[EDIT: Ruta, I'm not sure I fully addressed your question. I haven't tried to make the idea formal and quantifiable; it is more heuristic, as expressed above. Let me consider it for a bit and see if it can be given a more formal, rigorous encoding... possibly the attempt will show the idea invalid. It should be a useful exercise.]
 
  • #103
jambaugh said:
In formulating any operationally meaningful definitions in the context of science, we start with the primaries of observation, i.e., causally interacting with one's environment. It is sensible, then, that all other concepts, including metric distance and time, are derivative of causal connection. The mystery to be solved is rather the extent to which mutually interacting systems either accidentally or necessarily resolve themselves into the space-time-field structure we are able to perceive and map with our theories. In doing that, I think causality is necessarily local, in that localization is necessarily defined causally.

It seems difficult to define space and time using interacting systems because you need the concepts of space and time to make sense of what you mean by "systems" to begin the process. That is, what you mean by "a system" seems to require trans-temporal identification and to have "two systems" requires spatial separation -- what else would you use to discriminate between otherwise identical systems? That's why we chose a co-definition of space, time and sources (as understood in discrete QFT) as our fundamental operating principle. I look forward to your solution.
 
  • #104
Demystifier said:
I still don't understand your logic. So I'll start with a question. Do you agree that theory and experiment favor nonlocality? (I'm not asking if they definitely prove it, because they don't. I'm only asking if they favor it.)

I know that it is generally recognized that "theory and experiment favor nonlocality". But no, I am afraid I don't agree with that for reasons outlined in my post #1 in this thread.
 
  • #105
akhmeteli said:
I know that it is generally recognized that "theory and experiment favor nonlocality". But no, I am afraid I don't agree with that for reasons outlined in my post #1 in this thread.
Then my next question is: what WOULD you accept as a good argument for nonlocality? For example, if someone made better detectors with high enough efficiency that the fair-sampling loophole is avoided, and the experiments still violated Bell inequalities, would you accept THAT as good evidence for nonlocality?
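[Aside: for concreteness on what "higher efficiency" has to mean here, the known Garg-Mermin analysis for a CHSH test with symmetric detector efficiency eta and post-selection on coincidences raises the local-hidden-variable ceiling from 2 to 4/eta - 2, so a loophole-free violation of the quantum maximum 2*sqrt(2) needs eta above roughly 82.8%. A minimal Python sketch, purely illustrative:]

Code:
import numpy as np

# LHV ceiling under fair-sampling post-selection with symmetric
# detector efficiency eta: S <= 4/eta - 2 (Garg-Mermin style bound).
# The quantum maximum for CHSH is 2*sqrt(2).
eta_min = 4 / (2 + 2 * np.sqrt(2))   # efficiency where the two curves meet
print(f"threshold efficiency: {eta_min:.3f}")   # ~0.828

for eta in (0.70, 0.80, 0.83, 0.95):
    ceiling = 4 / eta - 2
    closes_loophole = 2 * np.sqrt(2) > ceiling
    print(f"eta={eta:.2f}  LHV ceiling={ceiling:.3f}  "
          f"QM violation closes loophole: {closes_loophole}")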
 
  • #106
jambaugh said:
If we actually (in our conceptual model of how nature works) allow causal feedback, future to past, it seems to me then we must invoke a "meta-time" over which such phenomena would decay out or reinforce to a caustic threshold or stable oscillation, (the local "reality" oscillating w.r.t. this meta-time).
That's interesting, because my explicit Bohmian model of relativistic nonlocal reality does involve a "meta time".

jambaugh said:
The problem as I see it is that this sort of speculation is not operationally meaningful. It's no different from supposing an invisible aether, or Everett many worlds. Sure, you can speculate, but you can't test within the bounds of science. Such phenomena are by their nature beyond observation. Again, I see the "reality" of it as meaningless within the context of science. That isn't an argument, just the result of my many internal arguments over past years.
That objection can, of course, be also attributed to the nonrelativistic Bohmian interpretation that does not involve the "meta time".
 
  • #107
Demystifier said:
Then my next question is: what WOULD you accept as a good argument for nonlocality? For example, if someone made better detectors with high enough efficiency that the fair-sampling loophole is avoided, and the experiments still violated Bell inequalities, would you accept THAT as good evidence for nonlocality?

Yes, that would certainly be good evidence of nonlocality (I mean if violations of the genuine Bell inequalities, without loopholes, were demonstrated experimentally). In that case I would certainly have to reconsider my position. To be frank, I cannot promise I'll reject locality in that case rather than, say, free will, but I will certainly have a hard time trying to adapt to the new reality. The problem is that locality will not be the only thing I'll need to reconsider in that case. Such an experimental demonstration would also undermine my firm belief in unitary evolution and relativity. And this is in fact the main reason I don't expect any violations of the genuine Bell inequalities.

To give a direct answer to your question "What WOULD I accept as a good argument for nonlocality?", I should also add that an experimental demonstration of faster-than-light signaling would certainly be much more direct and convincing evidence of nonlocality. But again, locality would not be the only casualty of such a demonstration. Unitary evolution and relativity would also have a hard time trying to survive.
 
  • #109
akhmeteli said:
The problem is that locality will not be the only thing I'll need to reconsider in that case. Such an experimental demonstration would also undermine my firm belief in unitary evolution and relativity. And this is in fact the main reason I don't expect any violations of the genuine Bell inequalities.

First, Bell tests ARE genuine. I think you mean "loophole-free". All experiments have "loopholes"; some are simply more relevant than others, and you are free to hold your personal opinion. But it is manifestly unfair to characterize the hundreds/thousands of different Bell tests themselves as "not genuine".

Second: that is quite a bold prediction you are making; I am not sure what would make you think that quantum mechanics is actually incorrect (an absolute deduction from your statement).

And last: why do you need to abandon relativity in the case of a confirmed (for you) violation of a Bell Inequality? The speed of light will still remain a constant in all local reference frames. Mass and clocks will still follow the standard rules. So what changes? The only things that change are physical effects not described by relativity in the first place. I do not consider relativity to include the absolute prediction that nonlocal elements cannot exist. I think it is an implied result, and one that could well fit within a larger theory. In fact, that is a result that Demystifier has been expressing for some time.
 
  • #110
RUTA said:
It seems difficult to define space and time using interacting systems because you need the concepts of space and time to make sense of what you mean by "systems" to begin the process. That is, what you mean by "a system" seems to require trans-temporal identification and to have "two systems" requires spatial separation -- what else would you use to discriminate between otherwise identical systems? That's why we chose a co-definition of space, time and sources (as understood in discrete QFT) as our fundamental operating principle. I look forward to your solution.

Well, consider for example the entangled electron pair, totally anti-correlated. We typically factor the system into left-moving and right-moving particles (picking our orientation frame appropriately), and we then speak of entanglement of their spins. We could as easily speak of the up z-spin and the down z-spin particle. This is a distinct factorization of the composite system into "two particles". Another distinct factorization is into x-spin up vs. down. Each is a different "reality", and the plurality of choices specifically shows our classical bias in thinking of the composite system as two objects. We should rather refer to "a factor" instead of "the component". (And I think equating different factorizations is the principal mistake in parsing the EPR experiment and other entangled systems.)
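[Aside: a quick numerical check of the basis-independence behind these alternative labelings. This is an illustrative numpy sketch of my own, not jambaugh's construction; it verifies that the singlet has the same antisymmetric form whether "up/down" refers to the z axis or the x axis, which is what makes the alternative decompositions equally available.]

Code:
import numpy as np

# Singlet built from the z basis: (|z+ z-> - |z- z+>)/sqrt(2)
zp, zm = np.array([1.0, 0.0]), np.array([0.0, 1.0])
singlet_z = (np.kron(zp, zm) - np.kron(zm, zp)) / np.sqrt(2)

# The same construction in the x basis
xp = np.array([1.0, 1.0]) / np.sqrt(2)
xm = np.array([1.0, -1.0]) / np.sqrt(2)
singlet_x = (np.kron(xp, xm) - np.kron(xm, xp)) / np.sqrt(2)

# Identical up to a global phase: the "up/down" labels carry no
# preferred axis, so the z-labeled and x-labeled descriptions pick
# out the same composite state.
print(np.allclose(singlet_z, singlet_x) or
      np.allclose(singlet_z, -singlet_x))      # True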

Now you may argue that spin is also a space-time concept, but I could as easily have used quark color instead of spin. More to the point, we may find it "difficult to define space and time using interacting systems because" we "need the concepts of space and time to make sense of what [we] mean by 'systems' to begin the process" due to our being space-time entities. That is to say, it is a failing of our imagination and an artifact of our nature, not of the universe itself.

Agreed, initially we need a concept of time, but it need not be metric, only topological and ordered so as to reflect causal sequence. I can then conceive of a large-dimensional quantum system with a complicated random Hamiltonian. (Reparametrizing time to make it t-independent = picking a t-metric, or class of metrics, dictated by the dynamics.)

I can also conceive of factoring that system into N 2-dimensional components, where 2^N is close to the dimension. Each 2-dim factor has its own U(2) ~ U(1)xSO(3) structure, and I look at the global Hamiltonian and ask what form it takes in terms of internal plus interaction terms. I can then consider different choices of factorization, which for the given Hamiltonian might simplify its form.

If I could find some way to formulate an iteration over cases and an optimization principle (say, minimum sum of component entropies, i.e. minimal entanglement, or otherwise some quantification of symmetry or near-symmetry of the Hamiltonian, or ...), then I might find that a global su(2)xsu(2) ~ so(4) group [so(4) being the compact deformation of iso(3), the Lie algebra of the Euclidean group of spatial geometry] naturally emerges for random Hamiltonians under appropriate factorizations and as t increases sufficiently. In short, a "natural" condensation into a 3-dimensional space as a spin network, with imperfections producing e.g. gauge defects. Maybe with some arm-waving and invocation of anthropic principles I could reconstruct the universe in such a fashion.
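[Aside: to make the kind of search described here concrete, a toy numpy/scipy sketch of my own devising: a random Hamiltonian on (C^2)^(x3), whose ground state is scored by the sum of single-qubit entanglement entropies, with "choosing a factorization" modeled as conjugating the state by a random global unitary. The optimization criterion and all names are illustrative assumptions, not jambaugh's program.]

Code:
import numpy as np
from scipy.linalg import eigh
from scipy.stats import unitary_group

N = 3                     # qubit factors; composite dimension 2^N = 8
dim = 2 ** N
rng = np.random.default_rng(0)

# A random Hermitian "Hamiltonian" and its ground state
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (A + A.conj().T) / 2
psi = eigh(H)[1][:, 0]

def total_entanglement(psi):
    """Sum of single-qubit von Neumann entropies for the canonical
    factorization C^8 = C^2 (x) C^2 (x) C^2 -- the quantity suggested
    above as the thing to minimize over factorizations."""
    total = 0.0
    for k in range(N):
        t = np.moveaxis(psi.reshape([2] * N), k, 0).reshape(2, -1)
        p = np.linalg.eigvalsh(t @ t.conj().T)   # reduced density matrix
        p = p[p > 1e-12]
        total -= np.sum(p * np.log2(p))
    return total

# "Choosing a different factorization" = rotating by a global unitary;
# a crude random search for a factorization with less total entanglement.
best = total_entanglement(psi)
for _ in range(2000):
    U = unitary_group.rvs(dim)
    best = min(best, total_entanglement(U @ psi))

print("canonical factorization:", round(total_entanglement(psi), 3))
print("best sampled factorization:", round(best, 3))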

The question is: for a random large quantum system, can we extrapolate how an entity within that system, able to develop science and formulate physics, would paint his universe? What is the range of possibilities?

I haven't done so yet, of course, and such a program may not be "the right way to go about it" (indeed, I can already see many problems), but it is an example of how one might go about constructing/determining spatial structure from scratch. It is not inconceivable to me.
 
  • #111
jambaugh said:
Well, consider for example the entangled electron pair, totally anti-correlated. We typically factor the system into left-moving and right-moving particles (picking our orientation frame appropriately), and we then speak of entanglement of their spins. We could as easily speak of the up z-spin and the down z-spin particle. This is a distinct factorization of the composite system into "two particles". Another distinct factorization is into x-spin up vs. down. Each is a different "reality", and the plurality of choices specifically shows our classical bias in thinking of the composite system as two objects. We should rather refer to "a factor" instead of "the component". (And I think equating different factorizations is the principal mistake in parsing the EPR experiment and other entangled systems.)

You've snuck spatiality in through the back door: you need two experimental outcomes, so you need two detectors. You don't need to talk about spatiality in the context of a "quantum system," but you do need those detectors. And, of course, you need to define what you mean by "up" and "down" outcomes in the context of those detectors. [In fact, we don't have any graphical counterpart to "quantum systems" in our approach.]

jambaugh said:
Now you may argue that spin is also a space-time concept, but I could as easily have used quark color instead of spin. More to the point, we may find it "difficult to define space and time using interacting systems because" we "need the concepts of space and time to make sense of what [we] mean by 'systems' to begin the process" due to our being space-time entities. That is to say, it is a failing of our imagination and an artifact of our nature, not of the universe itself.

Moving to charge doesn't help: you need "some thing" to "possess" the charge, even if you attribute it to the detectors. So, again, how do you distinguish two such otherwise identical "things" without space?

jambaugh said:
Agreed, initially we need a concept of time, but it need not be metric, only topological and ordered so as to reflect causal sequence. I can then conceive of a large-dimensional quantum system with a complicated random Hamiltonian. (Reparametrizing time to make it t-independent = picking a t-metric, or class of metrics, dictated by the dynamics.)

Exactly what we concluded: "time" is inextricably linked to what we mean by "things" (discrete QFT sources, for us). This is topological, not geometric, as you say. Now, are you going to argue that time is "special" in this sense over "space"? That is, we "need" a notion of temporality at the topological level but not space?

jambaugh said:
I can also conceive of factoring that system into N 2-dimensional components, where 2^N is close to the dimension. Each 2-dim factor has its own U(2) ~ U(1)xSO(3) structure, and I look at the global Hamiltonian and ask what form it takes in terms of internal plus interaction terms. I can then consider different choices of factorization, which for the given Hamiltonian might simplify its form.

Interaction between ... ? Again, more than one "thing" will require some form of differentiation. Are you saying you will have a theoretical counterpart to every particle in the universe? That is, you can't talk about electrons, quarks, muons, ... in general?

jambaugh said:
I haven't done so yet, of course, and such a program may not be "the right way to go about it" (indeed, I can already see many problems), but it is an example of how one might go about constructing/determining spatial structure from scratch. It is not inconceivable to me.

I don't see, as I argue above, that you've succeeded even conceptually. You need the notions of identification and differentiation to have "things."
 
  • #112
Demystifier said:
Akhmeteli, that seems to be a reasonable answer. However, I think that nonlocality is compatible with relativity and unitary evolution. For more details see
https://www.physicsforums.com/showthread.php?t=354083
especially posts #1 and #109. I would like to see your opinion on that.

Dear Demystifier,

I did not say that "nonlocality is incompatible with relativity and unitary evolution". Indeed, tachyons are thinkable. However, it seems to me that relativity and unitary evolution in their current form leave little space for nonlocality. I remember studying quantum field theory many years ago. The lecturer was Professor Shirkov. Of course, we used his well-known book (N. N. Bogolyubov and D. V. Shirkov, 'Introduction to the Theory of Quantized Fields'). One of the basic principles used in that book was microcausality. So I tend to believe nonlocality would lead to completely different forms of unitary evolution and relativity (for example, one such new form might require tachyons). Explicit or implicit faster-than-light signaling does not follow from the current form of unitary evolution and relativity. To get such nonlocality in the Bell theorem you need something extra, such as the projection postulate. And this postulate generates nonlocality in a very direct way: indeed, according to this postulate, as soon as you measure a projection of spin of one particle of a singlet, the value of the projection of spin of the other particle immediately becomes determined, no matter how far from each other the particles are, and this is what the Bell theorem is about.

I looked at the references you gave. Again, I agree that unitary evolution and relativity, strictly speaking, do not eliminate nonlocality. However, I wanted to ask you something. If I am not mistaken, you mentioned recently that Bohm's theory is superdeterministic. That seems reasonable. Furthermore, maybe unitary evolution is also, strictly speaking, superdeterministic. Indeed, it can include all observers and instruments, at least in principle. So my question is: what does this mean for the nonlocality of Bohm's theory?
 
  • #113
Demystifier said:
Akhmeteli, that seems to be a reasonable answer. However, I think that
nonlocality is compatible with relativity and unitary evolution.
For more details see
https://www.physicsforums.com/showthread.php?t=354083

I think the same.

yoda jedi said:
specifically:

Tumulka:
http://arxiv.org/PS_cache/quant-ph/pdf/0406/0406094v2.pdf
and
http://arxiv.org/PS_cache/quant-ph/pdf/0602/0602208v2.pdf

Bedingham:
http://arxiv.org/PS_cache/arxiv/pdf/0907/0907.2327v1.pdf
 
  • #114
Demystifier said:
... if someone made better detectors with high enough efficiency that the fair-sampling loophole is avoided, and the experiments still violated Bell inequalities, would you accept THAT as good evidence for nonlocality?
No, of course not.

I asked in a previous post:
Is Bell's theorem about the way the quantum world is, or is it about limitations on the formalization of entangled states?
The formalism is, in effect, modeling the experimental design(s) it's associated with, and must be compatible with them.

Quantum nonseparability, via the SQM representation, has to do with the nonfactorability of entangled state representations, which reflects the necessary statistical dependency between A and B, not some property of the underlying quantum world.

The predictions of Bell LHV models (characterized by their incorporation of the Bell locality condition, i.e. factorability of the joint entangled state representation) don't fully agree with experimental results precisely because these models are incompatible with the salient feature of experiments designed to produce entanglement, namely statistical dependence between A and B.

And the statistical dependence between A and B is produced solely via the local transmissions and interactions involved in the pairing process.

So the incompatibility of Bell LHV models with SQM, and the experimental violation of Bell inequalities, have nothing to do with nonlocality in Nature.

It might also be noted that calling SQM a local or nonlocal theory (whether due to Bell associated considerations or some interpretation of the formalism by itself) is more obfuscating than enlightening.
 
  • #116
Demystifier said:
That's interesting, because my explicit Bohmian model of relativistic nonlocal reality does involve a "meta time".
...
That objection can, of course, be also attributed to the nonrelativistic Bohmian interpretation that does not involve the "meta time".

Yes, I can see how the presence/absence of a meta-time would fit in, and I don't object to its invocation per se. I see, e.g., the Bohmian interpretation (and many worlds) not so much as an interpretation as a model, given that, as I argue, it invokes non-operational components.

Thus, if one were to simply drop the word "interpretation" from BI, I'd be all for it.

Acknowledged as such, I think Bohmian QM could be a nice tool, comparable to, e.g., treating space-time as a dynamic manifold with its own meta-time and meta-dynamics under which it must relax to a stationary state yielding a solution of Einstein's equations. I don't have to assert the "reality" of extra dimensions, or of that meta-time in which space-time is embedded, to use the model as a tool for calculation and enumeration of cases.

But I find "reality" is inherently a classical concept, and indeed the epitome of classical-ness. I see trying to hold onto the "reality" part of the negated "local reality" as regressive (it should be replaced with a non-objective "actuality"). That's a somewhat intuitive judgment, of course, but I believe it is based on good heuristic principles.
 
  • #117
akhmeteli said:
Yes, that would certainly be good evidence of nonlocality (I mean if violations of the genuine Bell inequalities, without loopholes, were demonstrated experimentally).
Experimental loopholes have nothing to do with it. Bell's LHV ansatz is incompatible with QM because QM, a statistical theory, correctly models the statistical dependency between A and B of the entangled state (via nonfactorability of the joint state representation), while Bell's formulation doesn't.

akhmeteli said:
To get such nonlocality in the Bell theorem you need something extra, such as the projection postulate. And this postulate generates nonlocality in a very direct way: indeed, according to this postulate, as soon as you measure a projection of spin of one particle of a singlet, the value of the projection of spin of the other particle immediately becomes determined, no matter how far from each other the particles are, and this is what the Bell theorem is about.
The assumption underlying the projection postulate is that what is being jointly analyzed at A and B during the same coincidence interval is the same thing. Where's the nonlocality?
 
  • #118
DrChinese said:
First, Bell tests ARE genuine. I think you mean "loophole-free". All experiments have "loopholes"; some are simply more relevant than others, and you are free to hold your personal opinion. But it is manifestly unfair to characterize the hundreds/thousands of different Bell tests themselves as "not genuine".

Thank you for your comments.

I did not say the tests were not genuine; I just did not say that. However, the Bell inequalities violated in those tests were not the genuine ones, i.e. those defined in the Bell theorem, because either they were doctored using the fair-sampling assumption or the spatial separation was not sufficient. So I insist that genuine Bell inequalities were not violated in those experiments, and this is not just my opinion, this is mainstream (I admit that, strictly speaking, there is no consensus on that, as you strongly disagree:-) )

DrChinese said:
Second: that is quite a bold prediction you are making; I am not sure what would make you think that quantum mechanics is actually incorrect (an absolute deduction from your statement).

What makes me think that is the fact that unitary evolution and the projection postulate contradict each other, so they cannot both be correct.

DrChinese said:
And last: why do you need to abandon relativity in the case of a confirmed (for you) violation of a Bell Inequality? The speed of light will still remain a constant in all local reference frames. Mass and clocks will still follow the standard rules. So what changes? The only things that change are physical effects not described by relativity in the first place. I do not consider relativity to include the absolute prediction that nonlocal elements cannot exist. I think it is an implied result, and one that could well fit within a larger theory. In fact, that is a result that Demystifier has been expressing for some time.

I answered this question in my reply to Demystifier. In brief, I admit that relativity and nonlocality, strictly speaking, are not incompatible, but I tend to believe that relativity and unitary evolution in their current form do not suggest nonlocality.
 
  • #119
yoda jedi said:
i think the same.

Please see my answers to Demystifier and DrChinese.
 
  • #120
ThomasT said:
Experimental loopholes have nothing to do with it. Bell's LHV ansatz is incompatible with QM because QM, a statistical theory, correctly models the statistical dependency between A and B of the entangled state (via nonfactorability of the joint state representation), while Bell's formulation doesn't.

The assumption underlying the projection postulate is that what is being jointly analyzed at A and B during the same coincidence interval is the same thing. Where's the nonlocality?

Dear ThomasT,

I am awfully sorry, I've read your post several times, but I just cannot understand a word.
 
  • #121
akhmeteli said:
If I am not mistaken, you mentioned recently that Bohm's theory is superdeterministic. That seems reasonable. Furthermore, maybe unitary evolution is also, strictly speaking, superdeterministic. Indeed, it can include all observers and instruments, at least in principle. So my question is: what does this mean for the nonlocality of Bohm's theory?
Bohmian mechanics is both superdeterministic and nonlocal. This should not be surprising, because Bohmian mechanics uses the wave function, and the wave function is a nonlocal and deterministic object.
 
  • #122
Demystifier said:
Bohmian mechanics is both superdeterministic and nonlocal. This should not be surprising, because Bohmian mechanics uses the wave function, and the wave function is a nonlocal and deterministic object.

I have not given much thought to superdeterminism, so please forgive me if the following question is downright stupid.

My understanding is that superdeterminism rejects free will. So it looks like, from the point of view of Bohmian mechanics, no possible results of Bell tests can eliminate local realism, because there is no free will anyway? I know that, Bohmian mechanics or not, the "superdeterminism loophole" cannot be eliminated in Bell tests, but superdeterminism is typically considered a pretty extreme notion, and now it turns out it is alive and kicking in such a relatively established approach as the Bohmian one?
 
  • #123
As I understand it, superdeterminism alone is not enough to create a loophole in Bell tests. In addition to superdeterminism, we also need an evil Nature that positioned BM particles in advance in a very special way, to trick the scientists and laugh at them.

In some sense that loophole is like the 'Boltzmann brain', which also cannot be ruled out. BTW, the 'Boltzmann brain' argument can be used even to deny QM as a whole: the world is just Newtonian, but the 'Boltzmann brain' has memories that QM was discovered and experimentally verified.
 
  • #124
Demystifier said:
Bohmian mechanics is both superdeterministic and nonlocal. This should not be surprising, because Bohmian mechanics uses the wave function, and the wave function is a nonlocal and deterministic object.
That's a useful observation. It's obvious, as you say, if you think of it. Thanks.

Do you have a view of how this meshes with arguments about free will, or do you think the issue of free will is overblown?
 
  • #126
akhmeteli said:
My understanding is that superdeterminism rejects free will.
True.

akhmeteli said:
So it looks like, from the point of view of Bohmian mechanics, no possible results of Bell tests can eliminate local realism, because there is no free will anyway?
Wrong. Bohmian mechanics is, by definition, a theory of nonlocal realism, so anything which assumes Bohmian mechanics eliminates local realism.

akhmeteli said:
I know that, Bohmian mechanics or not, the "superdeterminism loophole" cannot be eliminated in Bell tests, but superdeterminism is typically considered a pretty extreme notion, and now it turns out it is alive and kicking in such a relatively established approach as the Bohmian one?
Superdeterminism by itself is not extreme at all. After all, classical mechanics is also superdeterministic. What is extreme is the idea that superdeterminism may eliminate nonlocality in QM. Namely, superdeterminism alone is not sufficient to eliminate nonlocality. Instead, to eliminate nonlocality, superdeterminism must be combined with a VERY SPECIAL CHOICE OF INITIAL CONDITIONS (see also the post of Dmitry67 above). It is such special conspiratorial initial conditions that are considered extreme.
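[Aside: to see why such initial conditions deserve the name "conspiratorial", here is a toy Monte Carlo of my own, purely illustrative: if the hidden variable lambda prepared at the source is allowed to encode the settings that will later be chosen, a perfectly local outcome rule reproduces the singlet correlation E(a,b) = -cos(a-b) and "violates" CHSH, because Bell's assumption that the settings are independent of lambda has been given up.]

Code:
import numpy as np

rng = np.random.default_rng(0)
a_angles = [0.0, np.pi / 2]            # Alice's two settings
b_angles = [np.pi / 4, 3 * np.pi / 4]  # Bob's two settings
n = 200_000

E = np.zeros((2, 2))
cnt = np.zeros((2, 2))
for _ in range(n):
    i, j = rng.integers(2), rng.integers(2)   # tonight's settings
    # Conspiracy: lambda prepared at the source already "knows" (i, j),
    # plus two shared coins. Each outcome is then a local function of
    # (own setting, lambda) -- no signaling anywhere.
    coin_A, coin_B = rng.random(), rng.random()
    A = 1 if coin_A < 0.5 else -1
    p_same = (1 - np.cos(a_angles[i] - b_angles[j])) / 2
    B = A if coin_B < p_same else -A
    E[i, j] += A * B
    cnt[i, j] += 1

E /= cnt
S = E[0, 0] - E[0, 1] + E[1, 0] + E[1, 1]
print(S)    # ~ -2.83, i.e. |S| ~ 2*sqrt(2), beyond the Bell bound of 2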
 
  • #127
Demystifier said:
In my opinion, free will is only an illusion. See the attachment in
https://www.physicsforums.com/showpost.php?p=2455753&postcount=109
Fair enough, given the just-hedged-enough nature of "you think that you have free will. But it may only be an illusion". For my part, I'm not willing to make strong claims about something that appears not to be so easily examined experimentally, but OK, if we have the hedge.
 
  • #128
Peter Morgan said:
Fair enough, given the just-hedged-enough nature of "you think that you have free will. But it may only be an illusion". For my part, I'm not willing to make strong claims about something that appears not to be so easily examined experimentally, but OK, if we have the hedge.
I'm glad to see that we (you and me) think similarly.
 
  • #129
Demystifier said:
After all, classical mechanics is also superdeterministic.
Right.
What is extreme is the idea that superdeterminism may eliminate nonlocality in QM. Namely, superdeterminism alone is not sufficient to eliminate nonlocality. Instead, to eliminate nonlocality, superdeterminism must be combined with a VERY SPECIAL CHOICE OF INITIAL CONDITIONS (see also the post of Dmitry67 above). It is such special conspiratorial initial conditions that are considered extreme.
The "very special"ness is only that, given that the state of the whole experimental apparatus at the times that simultaneous events were recorded, together with the instrument settings at the time, were what they were, the state of the whole experimental apparatus and its whole past light cone at some point in the past must have been consistent with the state that we observed. From a classical deterministic dynamics point of view, this is only to say that the initial conditions now determine the initial conditions at past times (and at future times).

A thermodynamic or statistical mechanical point of view of what the state is, however, places a less stringent requirement: the thermodynamic or statistical mechanical state in the past must have been consistent with the recorded measurements that we make now. An experiment that violates Bell-CHSH inequalities makes a record, typically, of a few million events that are identified as "pairs", which is not a very tight constraint on what the state of the universe was in the backward light-cone a year ago.

A probabilistic dynamics, such as that of QM, only claims that the statistics that are observed now on various ensembles of data constrain what the statistics in the past would have been if we had measured them. This kind of move to probabilistic dynamics is as open to classical modeling in space-time as it is to QM, in which we make the superdeterminism apply only to probability distributions instead of to deterministic states.

To some extent this move suggests giving up particle trajectories, but of course trajectories can be added that are consistent with the probabilistic dynamics of QM, in several ways, at least including deBB, Nelson, and SED (insofar as the trajectories that we choose to add are beyond being looked at by experiment, however, we should perhaps be metaphysically rather noncommittal).
 
  • #130
From an interview with Anton Zeilinger:

I'd like to come back to these freedoms. First, if you assumed there were no freedom of the will (and there are said to be people who take this position), then you could do away with all the craziness of quantum mechanics in one go.

True, but only if you assume a completely determined world where everything that happened, absolutely everything, were fixed in a vast network of cause and effect. Then sometime in the past there would be an event that determined both my choice of the measuring instrument and the particle's behaviour. Then my choice would no longer be a choice, the random accident would be no accident, and the action at a distance would not be action at a distance.

Could you get used to such an idea?

I can't rule out that the world is in fact like that. But for me the freedom to ask questions of nature is one of the most essential achievements of natural science. It's a discovery of the Renaissance. For the philosophers and theologians of the time, it must have seemed incredibly presumptuous that people suddenly started carrying out experiments, asking questions of nature and deducing laws of nature, which are in fact the business of God. For me every experiment stands or falls with the fact that I'm free to ask the questions and carry out the measurements I want. If that were all determined, then the laws of nature would only appear to be laws, and the entire natural sciences would collapse.

http://print.signandsight.com/features/614.html
 
  • #131
Hi Nikman. Note, though, that Zeilinger has limited the discussion to "complete" determinism. As he says, he can't rule complete determinism out; he just doesn't like it and would rather do something else. Fair enough.

I'm curious what you think, since Zeilinger isn't here, about the suggestion that we take the state to be either thermodynamic or statistical mechanical (i.e., a deterministic evolution of probability distributions, without necessarily introducing deterministic trajectories). Part of the suggestion here is to emulate, in a classical setting, the relative lack of metaphysical commitment of, say, the Copenhagen interpretation of QM to anything that we do not record as part of an experiment, which to me particularly includes trajectories.
 
  • #132
Demystifier said:
Superdeterminism by itself is not extreme at all. After all, classical mechanics is also superdeterministic. What is extreme is the idea that superdeterminism may eliminate nonlocality in QM. Namely, superdeterminism alone is not sufficient to eliminate nonlocality. Instead, to eliminate nonlocality, superdeterminism must be combined with a VERY SPECIAL CHOICE OF INITIAL CONDITIONS (see also the post of Dmitry67 above). It is such special conspiratorial initial conditions that is considered extreme.

I don't think it is completely fair to say that classical mechanics is also superdeterministic, because I do not believe that is the case. If determinism were the same thing as superdeterminism, we would not need a special name for it. So I agree completely with your "extreme" initial conditions requirement, at a minimum.

But I also question whether [classical mechanics] + [extreme initial conditions] can ever deliver superdeterminism. In a true superdeterministic theory, you would have an explicit description of the mechanism by which the *grand* conspiracy occurs (the conspiracy to violate Bell inequalities). For example: we could connect Alice's detector setting to a switch controlled by the timing of decays of a radioactive sample. So that is now part of the conspiracy too, and the instructions for when to click or not must be present in that sample (and therefore presumably everywhere). Were that true, why can't we see it before we run the experiment?

As I have said many times: if you allow the superdeterminism "loophole" as a hedge for Bell inequalities, you essentially allow it as a hedge for all physical laws. Which sort of takes the meaning away from it (as a hedge) in the first place.

[I probably shouldn't have even written this post, so my apologies in advance. I consider it akin to false histories (the Omphalos hypothesis) - ad hoc and unfalsifiable.]
 
  • #133
nikman said:
From an interview with Anton Zeilinger:

... If that were all determined, then the laws of nature would only appear to be laws, and the entire natural sciences would collapse.

http://print.signandsight.com/features/614.html

Thanks for the link! I think his quote says a lot.
 
  • #134
DrChinese said:
But I also question whether [classical mechanics] + [extreme initial conditions] can ever deliver superdeterminism. In a true superdeterministic theory, you would have an explicit description of the mechanism by which the *grand* conspiracy occurs (the conspiracy to violate Bell inequalities).
Part of the conspiracy, at least, comes from the experimenter. One of a specific symmetry class of experimental apparatuses has to be constructed, typically over months, insofar as it used not to be easy to violate Bell inequalities. The material physics that allows us to construct the requisite correlations between measurement results is arguably pretty weird.

Furthermore, the standard way of modeling Bell-inequality-violating experiments in QM is to introduce projection operators onto polarization states of a single frequency mode of light, which are non-local operators. [Apropos of which, DrC, do you know of a derivation that is truly careful about the field-theoretic locality?] The QM model, in other words, is essentially a description of steady-state, time-independent statistics that has specific symmetry properties. Since I take violation of Bell inequalities to be more about contextuality than about nonlocality, which specifically is implemented by post-selection of a number of sub-ensembles according to which measurement settings were in fact chosen, this seems natural to me, but I wonder what you think?

Remember that with me you have to make a different argument than you might make with someone who thinks the measurement results are noncontextually determined by the state of each of two particles, since for me whether measurement events occur is determined jointly by the measurement devices and the field they are embedded in.
DrChinese said:
For example: we could connect Alice's detector setting to a switch controlled by the timing of decays of a radioactive sample. So that is now part of the conspiracy too, and the instructions for when to click or not must be present in that sample (and therefore presumably everywhere). Were that true, why can't we see it before we run the experiment?
I do wonder, but apparently that's how the statistics pile up. We have a choice of whether to just say, with Copenhagen, that we can say nothing at all about anything that is not macroscopic, or to consider what properties different types of models have to have in order to "explain" the results. A particle physicist tells a causal story about what happens in experiments, using particles, anti-particles, and ghost and virtual particles, with various prevarications about what is really meant when one talks about such things (which is typically nonlocal if anything like Wigner's definition of a particle is mentioned, almost inevitably); so it seems reasonable to consider what prevarications there have to be in other kinds of models. It's good that we know moderately well what prevarications we have to introduce in the case of deBB, and that they involve a nonlocal trajectory dynamics in that case.
DrChinese said:
As I have said many times: if you allow the superdeterminism "loophole" as a hedge for Bell inequalities, you essentially allow it as a hedge for all physical laws. Which sort of takes the meaning away from it (as a hedge) in the first place.
This might be true, I guess, although proving that superdeterminism is a hedge for all possible physical laws looks like tough mathematics to me. Is the same perhaps true for backward causation? Do you think it's an acceptable response to ask what constraints have to be put on superdeterminism (or backward causation) to make it give less away?
DrChinese said:
[I probably shouldn't have even written this post, so my apologies in advance. I consider it akin to false histories (the Omphalos hypothesis) - ad hoc and unfalsifiable.]
You're always welcome with me, DrC. I'm very pleased with your comments in this case. If you're ever in CT, look me up.
I like the Omphalos. Is it related to the heffalump?

Slightly after the above, I'm particularly struck by your emphasis on the degree of correlation required in the initial conditions to obtain the experimental results we see. Isn't the degree of correlation required in the past precisely the same as the degree of correlation that we note in the records of the experimental data? It's true that the correlations cannot be observed in the past without measurement of the initial state in outrageous detail across the whole of a time-slice of the past light-cone of a measurement event, insofar as there is any degree of dynamical chaos, but that doesn't take away from the fact that in a fine-grained enough description there is no change of entropy. [That last phrase is a bit cryptic, perhaps, but it takes my fancy a little. Measurements now are the same constraint on the state in the past as they are on the state now. Since they are actually observed constraints now, it presumably cannot be denied that they are constraints on the state now. If the actual experimental results look a little weird as constraints that one might invent now, then presumably they look exactly as weird as constraints on the state 10 years ago, no more and no less. As observed constraints, they are constraints on what models have to be like to be empirically adequate.] I'm worried that all this repetition is going to look somewhat blowhard, as it does a little to me now, so I'd be glad if you can tell me if you can see any content in it.
 
  • #135
Peter Morgan said:
Hi Nikman. Note, though, that Zeilinger has limited the discussion to "complete" determinism. As he says, he can't rule complete determinism out; he just doesn't like it and would rather do something else. Fair enough.

I made the mistake of claiming in a post some while back that the Zeilinger group's Leggett paper needed editing (for English clarity), because in its conclusion it seemed to suggest that the authors didn't foreclose even on superdeterminism (or something more or less equivalent). Well, I was wrong; they don't foreclose on it, as AZ makes clear here. He simply finds such a world unimaginable.

I'm curious what you think, since Zeilinger isn't here, about the suggestion that we take the state to be either thermodynamic or statistical mechanical (i.e., a deterministic evolution of probability distributions, without necessarily introducing deterministic trajectories). Part of the suggestion here is to emulate, in a classical setting, the relative lack of metaphysical commitment of, say, the Copenhagen interpretation of QM to anything that we do not record as part of an experiment, which to me particularly includes trajectories.

I'm far more abashed than flattered at being considered an acceptable stand-in to speak for this astonishing, brilliant man. For gosh sakes I'm not even a physicist; I'm at best an 'umble physics groupie.

In this dilettante capacity I'm not aware that he's ever gone as far as (say) Mermin (in the Ithaca Interpretation) and suggested that everything's correlations, dear boy, correlations. What does Bruknerian coarse-grainedness as complementary to decoherence tell us? This is really in part about what macrorealism means, isn't it? Does the GHZ Emptiness of Paths Not Taken have any relevance here?

My understanding via Hans C. von Baeyer is that Brukner and Zeilinger have plotted state evolution in "information space" (in terms of classical mechanics, equivalent to trajectories of billiard balls perhaps) and then translated that into Hilbert space where the math reveals itself to be the Schrödinger equation. How truly deterministic is the SE? My mental clutch is starting to slip now.
 
  • #136
Maaneli said:
I disagree. You replied to someone's suggestion that locality is worth sacrificing for realism with the claim that Leggett's work shows that even "realism" (no qualifications given about contextuality or non-contextuality) is not tenable without sacrificing another intuitively plausible assumption. But that characterization of Leggett's work is simply not accurate, as anyone can see by reading those abstracts you linked to. And I don't even think it's true that everyone in this field agrees that the word realism is used to imply classical realism, and that this is done without any confusion. I know several active researchers in this field who would dispute the validity of your use of terminology. Moreover, the link you gave to try and support your claim doesn't actually do that. If you read your own link, you'll see that everything Aspelmeyer and Zeilinger conclude about realism from their experiment is qualified in the final paragraph:

However, Alain Aspect, a physicist who performed the first Bell-type experiment in the 1980s, thinks the team's philosophical conclusions are subjective. "There are other types of non-local models that are not addressed by either Leggett's inequalities or the experiment," he said.

So Aspect is clearly indicating that Aspelmeyer and Zeilinger's use of the word "realism" is intended in a broader sense than Leggett's use of the term "classical realism".

It's not nitpicking on semantics, it's getting the physics straight. If that's too difficult for you to do, then I'm sorry, but maybe you're just not cut out for this thread.

I agree.
Reality is independence of observers.
 
  • #137
Peter Morgan said:
[... Peter Morgan's post #134, quoted in full above ...]

We have a lot of jackalopes in Texas, but few heffalumps.

---------------------------------

The issue is this: Bell sets limits on local realistic theories, so there may be several potential "escape" mechanisms. One is non-locality, of which the Bohmian approach is an example that attempts to explicitly describe the mechanism by which Bell violations can occur. Detailed analysis appears to provide answers as to how this could match observation. BM can be explicitly critiqued, and answers can be provided to those critiques.

Another is the "superdeterminism" approach. Under this concept, the initial conditions are just such that all experiments which are done will always show Bell violations. However, as with the "fair sampling" loophole, the idea is that over the full universe of possible observations (including those which are counterfactual) the true rate of coincidence does NOT violate a Bell Inequality. So there is a bias function at work. That bias function distorts the true results because the experimenter's free will is compromised. The experimenter can only select to perform measurements which support QM, due to the experimenter's (naive and ignorant) bias.

Now, without regard to the reasonableness of that argument, I point out the following cases, in which the results are identical.

a) The experimenter's detector settings are held constant for a week at a time.
b) The settings are changed at the discretion of the experimenter, at any interval.
c) The settings are changed due to clicks from a radioactive sample, per an automated system over which the experimenter has no direct control.
d) A new hypothesis, that the experiments actually show that a Bell Inequality is NOT violated, but the data recording device is modified coincidentally to show results indicating that the Bell Inequality was violated.

In other words, we know we won't see any difference among a), b) and c). And if d) occurred, it would be a different form of "superdeterminism". So the questions I am asking are: does superdeterminism need to obey any rules? Does it need to be consistent? Does it need to be falsifiable? Because clearly, case a) above should be enough to rule out superdeterminism (at least in my mind: the experimenter is exercising no ongoing choice past an initial point). Case c) requires that superdeterminism flow from one force to another, when the standard model shows no such mechanism (since there is no known connection between an experimental optical setting and the timing of radioactive decay). And case d) shows that there is always one more avenue by which we can float an ad hoc hypothesis.
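[Aside: a small simulation of my own to illustrate the "no difference" claim for a), b) and c). If the pair statistics are just the quantum singlet statistics, E(a,b) = -cos(a-b), the CHSH value is the same whether the settings are held in long fixed blocks, drawn pair-by-pair from one pseudo-random stream, or driven by an independent stream standing in for the radioactive sample. All names are illustrative.]

Code:
import numpy as np

rng = np.random.default_rng(1)
n = 400_000
a_angles = np.array([0.0, np.pi / 2])
b_angles = np.array([np.pi / 4, 3 * np.pi / 4])

def chsh(ai, bi):
    """Sample singlet outcomes for per-pair setting indices ai, bi
    (E(a,b) = -cos(a-b)) and return the CHSH combination."""
    delta = a_angles[ai] - b_angles[bi]
    A = rng.choice([-1, 1], size=n)
    same = rng.random(n) < (1 - np.cos(delta)) / 2   # P(A == B) for singlet
    B = np.where(same, A, -A)
    S = 0.0
    for i in (0, 1):
        for j in (0, 1):
            m = (ai == i) & (bi == j)
            S += np.mean(A[m] * B[m]) * (-1 if (i, j) == (0, 1) else 1)
    return S

# a) settings fixed for long blocks, cycling through all four combos
blocks = np.repeat(np.arange(4), n // 4)
# b) settings chosen pair-by-pair from one pseudo-random stream
# c) settings driven by an independent "radioactive decay" stream
decay = np.random.default_rng(42)
schedules = {
    "a) fixed blocks": (blocks // 2, blocks % 2),
    "b) experimenter": (rng.integers(0, 2, n), rng.integers(0, 2, n)),
    "c) decay-driven": (decay.integers(0, 2, n), decay.integers(0, 2, n)),
}
for name, (ai, bi) in schedules.items():
    print(name, round(chsh(ai, bi), 3))   # each ~ -2.83 = -2*sqrt(2)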

So you ask: is superdeterminism a hedge for all physical laws? If you allow the above, one might then turn around and say: does it not apply to other physical laws equally? Because my answer is that, if so, perhaps relativity is not a true effect; it is simply a manifestation of superdeterminism. All of those GPS satellites... they suffer from the idea that the experimenter is not free to request GPS information freely. So while the results appear to follow GR, they really do not. How is this less scientific than the superdeterminism "loophole" as applied to Bell?

In other words, there is no rigorous form of superdeterminism to critique at this point, only an ad hoc hypothesis. And we can formulate ad hoc hypotheses about any physical law. None of these will ever have any predictive utility. So I say it is not science in the conventional sense.

-----------------------

You mention contextuality and the subsamples (events actually recorded). And you also mention the "degree of correlation required in the initial conditions to obtain the experimental results we see". The issue I return to time after time: the bias function, the delta between the "true" universe and the observed subsample correlation rates, must itself be a function of the context. But it is sometimes negative and sometimes positive. That seems unreasonable to me, considering, of course, that the context depends ONLY on the relative angle difference and nothing else.

So we need a bias function that eliminates all other variables except the difference between measurement settings at a specific point in time. It must apply to entangled light, which will also show perfect correlations. But it must NOT apply to unentangled light (as you know, that is my criticism of the De Raedt model). And it must further return apparently random values in all cases. I believe these are all valid requirements of a superdeterministic model. As well as locality and realism, of course.
 
  • #138
Continued from above...

So what I am saying is: when you put together all of the requirements, I don't think you have anything that works remaining. You just get arguments that are no better than "last Thursdayism".

------------------------------

By the way, wouldn't GHZ falsify superdeterminism too? After all, there is no subsample.

Or would one make the argument that the experimenter had no free will as to the choice of what to measure? (That seems a stretch, since all observations yield results inconsistent with local realism - at least within experimental limits).
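
For reference, a sketch of the standard GHZ argument (in spin notation; the actual experiments use photon polarization, but the logic is the same). For the state $|\mathrm{GHZ}\rangle = \frac{1}{\sqrt{2}}(|000\rangle + |111\rangle)$, QM makes definite single-run predictions:

$$\sigma_x \sigma_y \sigma_y = \sigma_y \sigma_x \sigma_y = \sigma_y \sigma_y \sigma_x = -1, \qquad \sigma_x \sigma_x \sigma_x = +1$$

Local realism assigns pre-existing values $x_i, y_i \in \{-1,+1\}$ to each particle. Multiplying the first three products gives $x_1 x_2 x_3 (y_1 y_2 y_3)^2 = x_1 x_2 x_3 = (-1)^3 = -1$, while the fourth demands $x_1 x_2 x_3 = +1$. The contradiction appears in a single run - no inequality, no statistics, no subsample to bias.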
 
  • #139
Demystifier said:
True.


Wrong. Bohmian mechanics is, by definition, a theory of nonlocal realism, so anything which assumes Bohmian mechanics eliminates local realism.


Superdeterminism by itself is not extreme at all. After all, classical mechanics is also superdeterministic. What is extreme is the idea that superdeterminism may eliminate nonlocality in QM. Namely, superdeterminism alone is not sufficient to eliminate nonlocality. Instead, to eliminate nonlocality, superdeterminism must be combined with a VERY SPECIAL CHOICE OF INITIAL CONDITIONS (see also the post of Dmitry67 above). It is such special conspiratorial initial conditions that are considered extreme.

Thank you very much for the explanations
 
  • #140
DrChinese said:
I don't think it is completely fair to say that classical mechanics is also superdeterministic, because I do not believe that is the case. If determinism were the same thing as superdeterminism, we would not need a special name for it. So I agree completely with your "extreme" initial conditions requirement, at a minimum.
I see what you mean, but note that I use a different DEFINITION of the term "superdeterminism". In my language, superdeterminism is nothing but determinism applied to everything. Thus, a classical deterministic model of the world is superdeterministic if one assumes that, according to this model, everything that exists is described by the classical laws of physics. In my language, superdeterminism does not imply the absence of specific laws, such as Newton's law of gravitation.

Even with this definition of superdeterminism, it is not exactly the same as determinism. For example, if you believe that the classical laws of physics are valid everywhere except in the brain in which a genuine spiritual free will also acts on electric currents in the brain, then, according to my definition, such a view is deterministic but not superdeterministic.
 
  • #141
akhmeteli said:
I am awfully sorry, I've read your post several times, but I just cannot understand a word.
Ok, I'll try to present the gist of how I've learned to think about this in a less scattered way.

1. Bell locality can be parsed to include statistical independence between A and B.

2. Statistical dependence between A and B is sufficient to cause experimental violation of inequalities which are based on the (formal) assumption of statistical independence between A and B.

3. The statistical dependence is produced via local channels.

4. So, experimental violation of inequalities based on Bell locality doesn't imply nonlocality.

5. Formally, Bell locality entails that the joint probability of the entangled state be factorable into the product of the individual probabilities for A and B (written out after this list).

6. Bell locality is incompatible with the QM requirement that the entangled state representation be nonfactorable.

7. This nonfactorability or quantum nonseparability reflects the (locally produced) statistical dependencies required for the experimental production of entanglement.

8. Experimental loopholes notwithstanding, no Bell local theory can possibly reproduce the full range of QM predictions or experimental results wrt entangled states.

9. None of this implies the existence of nonlocality in Nature -- which is contrary to your idea that, in your words:
akhmeteli said:
Yes, that would certainly be good evidence of nonlocality (I mean if violations of the genuine Bell inequalities, without loopholes, are demonstrated experimentally).


10. None of this implies that SQM (associated with Bell's theorem) is a nonlocal theory -- which is contrary to your idea that, in your words:
akhmeteli said:
To get such nonlocality in the Bell theorem you need something extra - such as the projection postulate. And this postulate generates nonlocality in a very direct way: indeed, according to this postulate, as soon as you measure a projection of spin of one particle of a singlet, the value of the projection of spin of the other particle immediately becomes determined, no matter how far from each other the particles are, and this is what the Bell theorem is about.


11. In fact, the standard QM methodology and account (including the projection postulate and any quantum-level models associated with a particular experimental setup) is based on the assumption - at least tacit, but explicit in the case of some models - that there is a locally produced relationship between quantum disturbances analyzed at spacelike separations. E.g., in the case of the Aspect et al. experiments using atomic calcium cascades to produce entangled photons, the entangling relationship is assumed to be produced at emission - and the experimental design must entail statistical dependence between A and B in order to pair photons emitted by the same atom.
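
The factorability condition in point 5, in the usual notation (hidden variables $\lambda$, local settings $a$ and $b$):

$$P(A,B|a,b,\lambda) = P(A|a,\lambda)\,P(B|b,\lambda)$$

Any Bell local theory must therefore produce correlations of the form $E(a,b) = \int d\lambda\,\rho(\lambda)\,\bar{A}(a,\lambda)\bar{B}(b,\lambda)$, from which the CHSH bound $|S| \leq 2$ follows, while QM predicts values up to $2\sqrt{2}$ for appropriately chosen settings.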
 
  • #142
ThomasT said:
Ok, I'll try to present the gist of how I've learned to think about this in a less scattered way.

Thank you very much for your patience with me. At least now I don't feel as if I were trying to decipher a text in double-Dutch:-)

ThomasT said:
3. The statistical dependence is produced via local channels.

What local channels, if there is enough spatial separation?

ThomasT said:
8. Experimental loopholes notwithstanding, no Bell local theory can possibly reproduce the full range of QM predictions or experimental results wrt entangled states.

Again, the fact that local theories cannot reproduce all QM predictions (which include contradictions) cannot be used as an argument against local theories - it's their strong point.




ThomasT said:
11. In fact, the standard QM methodology and account (including the projection postulate and any quantum-level models associated with a particular experimental setup) is based on the assumption - at least tacit, but explicit in the case of some models - that there is a locally produced relationship between quantum disturbances analyzed at spacelike separations. E.g., in the case of the Aspect et al. experiments using atomic calcium cascades to produce entangled photons, the entangling relationship is assumed to be produced at emission - and the experimental design must entail statistical dependence between A and B in order to pair photons emitted by the same atom.

"the entangling relationship assumed to be produced at emission" is one thing, but the choice of projection of spin or polarization measured at A seems to immediately change the situation at B. If it were indeed so, that would be a problem for locality. At least that's what I tend to think.
 
  • #143
ThomasT said:
9. None of this implies the existence of nonlocality in Nature ...


11. In fact, the standard QM methodology and account (including the projection postulate and any quantum-level models associated with a particular experimental setup) is based on the assumption - at least tacit, but explicit in the case of some models - that there is a locally produced relationship between quantum disturbances analyzed at spacelike separations. E.g., in the case of the Aspect et al. experiments using atomic calcium cascades to produce entangled photons, the entangling relationship is assumed to be produced at emission - and the experimental design must entail statistical dependence between A and B in order to pair photons emitted by the same atom.

A comment just to make sure everyone is up on some of the refinements to the original Bell test regimen.

We now have the ability to entangle photons that have never met - this is called "entanglement swapping" (ES). Early versions of this protocol did not allow the photons to be created sufficiently far apart to eliminate local interaction, but the newer ones do. For example:

High-fidelity entanglement swapping with fully independent sources
(2009) Rainer Kaltenbaek, Robert Prevedel, Markus Aspelmeyer, Anton Zeilinger

"Entanglement swapping allows to establish entanglement between independent particles that never interacted nor share any common past. This feature makes it an integral constituent of quantum repeaters. Here, we demonstrate entanglement swapping with time-synchronized independent sources with a fi delity high enough to violate a Clauser-Horne-Shimony-Holt inequality by more than four standard deviations. The fact that both entangled pairs are created by fully independent, only electronically connected sources ensures that this technique is suitable for future long-distance quantum communication experiments as well as for novel tests on the foundations of quantum physics."

Note that the experiment in this paper does not actually execute the variation where the photons are never in each other's light cones, but you can be sure that is coming (if not already published).
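
For anyone who wants the algebra: ES rests on a standard Bell-basis identity (written here for generic qubits, not the specific polarization encoding of the paper). With pairs (1,2) and (3,4) each prepared in $|\Phi^+\rangle = \frac{1}{\sqrt{2}}(|00\rangle + |11\rangle)$:

$$|\Phi^+\rangle_{12} \otimes |\Phi^+\rangle_{34} = \frac{1}{2}\left(|\Phi^+\rangle_{14}|\Phi^+\rangle_{23} + |\Phi^-\rangle_{14}|\Phi^-\rangle_{23} + |\Psi^+\rangle_{14}|\Psi^+\rangle_{23} + |\Psi^-\rangle_{14}|\Psi^-\rangle_{23}\right)$$

So a Bell-state measurement on photons 2 and 3 projects photons 1 and 4 - which never interacted - into a definite Bell state (up to a known local correction), and pairs 1 & 4 can then violate a CHSH inequality.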

So basically, you have a pretty difficult time explaining the violation of a Bell Inequality by photon pairs that were never in a common light cone - without something being non-local, that is.
 
  • #144
akhmeteli said:
What local channels, if there is enough spatial separation?
Statistical dependence refers to the fact that a detection at A changes the sample space at B, and vice versa.

This happens during the pairing process via the coincidence circuitry.

All very local, but sufficient to render Bell locality incompatible with QM and entanglement experiments.

akhmeteli said:
Again, the fact that local theories cannot reproduce all QM predictions (which include contradictions) cannot be used as an argument against local theories - it's their strong point.
But QM predictions agree with experimental results, and Bell local theories don't. More importantly, Bell local theories can't possibly agree with experimental results ... ever -- because Bell's formal expression of locality encodes statistical as well as causal independence.

Bell locality contradicts an integral part of entanglement experiments, statistical dependence between A and B. The upside, for LHV advocates, is that this doesn't rule out local realist theories -- just Bell local theories. The downside, for nonlocality advocates, is that this tells us nothing about nonlocality wrt either Nature or standard QM.

akhmeteli said:
... the choice of projection of spin or polarization measured at A seems to immediately change the situation at B. If it were indeed so, that would be a problem for locality.
Yes, that would be a problem for locality. But that's not what standard QM says, and that's not what happens experimentally.
 
  • #145
ThomasT said:
Statistical dependence refers to the fact that a detection at A changes the sample space at B, and vice versa.

This happens during the pairing process via the coincidence circuitry.

All very local, but sufficient to render Bell locality incompatible with QM and entanglement experiments.

But QM predictions agree with experimental results, and Bell local theories don't. More importantly, Bell local theories can't possibly agree with experimental results ... ever -- because Bell's formal expression of locality encodes statistical as well as causal independence.

Bell locality contradicts an integral part of entanglement experiments, statistical dependence between A and B. The upside, for LHV advocates, is that this doesn't rule out local realist theories -- just Bell local theories. The downside, for nonlocality advocates, is that this tells us nothing about nonlocality wrt either Nature or QM.

Yes, that would be a problem for locality. But that's not what QM says, and that's not what happens experimentally.

Sorry, ThomasT, you've lost me again. This time I cannot say I don't understand a word, but 30% is too little for a meaningful discussion - this is a physics forum, not a crossword contest. With all due respect, if you believe you're saying something well-known that I don't know, give me a reference, if not, try to be clearer. And I mean much clearer.
 
  • #146
akhmeteli said:
"the entangling relationship assumed to be produced at emission" is one thing, but the choice of projection of spin or polarization measured at A seems to immediately change the situation at B. If it were indeed so, that would be a problem for locality. At least that's what I tend to think.



long time ago...​

...Quantum mechanics says that there should be a high correlation between results at the polarizers because the photons instantaneously "decide" together which polarization to assume at the moment of measurement, even though they are separated in space. Hidden variables, however, says that such instantaneous decisions are not necessary, because the same strong correlation could be achieved if the photons were somehow informed of the orientation of the polarizers beforehand...

...Quantum mechanics predicts that “non-local” correlations can exist between the particles. This means that if one photon is polarized in, say, the vertical direction, the other will always be polarized in the horizontal direction, no matter how far away it is. However, some physicists argue that this cannot be true and that quantum particles must have local values – known as “hidden variables” – that we cannot measure...
  • #147
ThomasT said:
But QM predictions agree with experimental results, and Bell local theories don't. More importantly, Bell local theories can't possibly agree with experimental results ... ever -- because Bell's formal expression of locality encodes statistical as well as causal independence.

...

Yes, that would be a problem for locality. But that's not what standard QM says, and that's not what happens experimentally.

These are not standard expressions of theory or experiment. Experimentally: when Alice acts, it appears "as if" the situation changes non-locally for Bob (and vice versa). Theoretically: a Bell local theory is one in which Alice's action does not appear "as if" it changes the situation at Bob to match, UNLESS there is a sub-c channel for propagation (or possibly a common earlier cause within a mutual light cone).
 
  • #148
akhmeteli said:
Sorry, ThomasT, you've lost me again. This time I cannot say I don't understand a word, but 30% is too little for a meaningful discussion - this is a physics forum, not a crossword contest. With all due respect, if you believe you're saying something well-known that I don't know, give me a reference, if not, try to be clearer. And I mean much clearer.
I don't know if it's a well known approach or not.

The argument is that Bell's locality condition isn't, exclusively, a locality condition. If it isn't, then what might this entail wrt the interpretation of experimental violations of inequalities based on Bell locality?

In a nutshell:

Bell locality doesn't just represent causal independence between A and B, but also statistical independence between A and B.

Statistical dependence between A and B means that a detection at A changes the sample space at B, and vice versa. The pairing process entails statistical dependence between A and B, and this statistical dependence can be accounted for via the local transmissions and interactions of the coincidence circuitry.

Statistical dependence between A and B is sufficient to violate inequalities based on Bell locality.

So, experimental violations of inequalities based on Bell locality, while they do rule out Bell local theories, don't imply nonlocality or necessarily rule out local realism.
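
One way to put this formally (my notation): what an optical Bell test actually estimates is the conditional correlation

$$E_{exp}(a,b) = E(a,b \mid \text{both photons detected and paired})$$

and this coincides with the unconditional $\int d\lambda\,\rho(\lambda)\,\bar{A}(a,\lambda)\bar{B}(b,\lambda)$ that the inequality constrains only if detection and pairing are statistically independent of $a$, $b$ and $\lambda$. If they are not, the bound need not apply to the recorded subsample.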
 
  • #149
DrChinese said:
These are not standard expressions of theory or experiment.
If not, they should be.

DrChinese said:
Experimentally: when Alice acts, it appears "as if" the situation changes non-locally for Bob (and vice versa).
This isn't the way that I've learned to think about it.

DrChinese said:
Theoretically: A Bell local theory is one in which Alice action does not appear "as if" the situation changes at Bob to match UNLESS there is a sub-c channel for propagation (or possibly a common earlier cause within a mutual light cone).
I'm not sure what you're saying. Bell local theories of entangled states don't match QM or experiments, do they?
 
  • #150
ThomasT said:
I'm not sure what you're saying. Bell local theories of entangled states don't match QM or experiments, do they?

Bell local + Bell realistic = ruled out.
 