Local realism ruled out? (was: Photon entanglement and )

Summary
The discussion revolves around the validity of local realism in light of quantum mechanics and Bell's theorem. Participants argue that existing experiments have not conclusively ruled out local realism because of various loopholes, such as the detection and locality loopholes. Bell's theorem itself is debated: some assert it demonstrates the incompatibility of quantum mechanics with local hidden-variable theories, while others claim it does not definitively negate local realism. References to peer-reviewed papers are made to support claims, but there is contention over the interpretation of those findings. Overall, the conversation highlights ongoing disagreement in the physics community over the implications of quantum entanglement and the measurement problem for local realism.
  • #781
Ilja said:
[..] I didn't even search, the point is that it is obvious. [..]
To me the contrary is obvious, because I deem Newton to have been of sound mind, based on what he did express; and since neither of us has proof of what Newton really thought on this matter, we have only our personal estimations of his thinking. :wink:
[De Broglie's theory] contains [the "implausible infinite and unfailing working range of QM, independent of distance"]. As in Newtonian theory, the speed of a particle depends on the positions of all other particles of the universe.
I'm afraid that you did not understand my question, which is not about speed. The usual discussions cover only half of the "spookiness". How does De Broglie's theory explain that an action on a particle at one end of the universe can have an undiminished effect on another particle at the other end of the universe? What physical mechanism did he propose for that? Note that if it requires a long answer, I'll start it as a new topic. :-p
[..] I would suggest to name this the "Orwellian interpretation" - changing the language so that one can no longer talk about reality :smile: [..]
I just (finally) read 1984 - and it is even more applicable to some of these discussions than I had imagined before I read it. Indeed, there is too much Newspeak going on.
 
  • #782
akhmeteli said:
I am not sure standard quantum theory truly predicts violations, as to “predict” them, it uses its mutually contradicting components – unitary evolution and the theory of measurement (e.g., the projection postulate). That’s not what I call “prediction”.
I'm sure, because I use the dBB interpretation, and in the dBB interpretation there is no such contradiction.

The collapse of the wave function in dBB is described by the unitary evolution of the wave function of the object together with the apparatus, and by the evolution of the object and the apparatus themselves (via the guiding equation). One can combine the full wave function $\psi_{\text{full}}(o,a,t)$ with the trajectory of the apparatus $a(t)$ to define an effective wave function of the object, $\psi_o(o,t) = \psi_{\text{full}}(o,a(t),t)$. The evolution equation for this effective wave function is not unitary during the measurement, because unitary evolution holds only for closed systems, or at least for systems that are not interacting with their environment. Before and after the measurement - that is, once o no longer interacts with anything else - the evolution is unitary. This follows easily from the unitary evolution of the full system.
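To make the effective wave function concrete, here is a toy numerical sketch in Python. Everything in it is an illustrative assumption - the two-branch "measurement" state is written down by hand rather than obtained from a Schrödinger equation, and a(t) stands in for a Bohmian apparatus trajectory - but it shows the point: conditioning $\psi_{\text{full}}$ on $a(t)$ gives a $\psi_o$ whose norm is not conserved during the interaction.

```python
import numpy as np

# Toy sketch of the conditional (effective) wave function
# psi_o(o,t) = psi_full(o, a(t), t).  Illustrative assumptions only:
# the two-branch 'measurement' state is written by hand, and a(t)
# stands in for a Bohmian apparatus trajectory in the '+' branch.

o = np.linspace(-5.0, 5.0, 401)
do = o[1] - o[0]

def psi_full(o, a, t):
    # Entangled (object, apparatus) state: two branches whose apparatus
    # packets separate as the 'measurement' proceeds.
    branch_plus = np.exp(-(o - 1.0) ** 2) * np.exp(-(a - t) ** 2)
    branch_minus = np.exp(-(o + 1.0) ** 2) * np.exp(-(a + t) ** 2)
    return branch_plus + branch_minus

def a_traj(t):
    return t  # the apparatus configuration follows the '+' branch

for t in (0.0, 1.0, 3.0):
    psi_eff = psi_full(o, a_traj(t), t)  # condition on the trajectory
    norm = np.sum(np.abs(psi_eff) ** 2) * do
    print(f"t = {t}: norm of conditional wave function = {norm:.4f}")

# The norm shrinks as the branches separate: the conditional wave
# function does not evolve unitarily, even though the full one (in a
# real model) would.  Once only one branch overlaps a(t), it evolves
# unitarily again.
```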

akhmeteli said:
One can always find some sociological explanations, but we are still left with the fact that the majority does not think dBB is as “nice” as you think. Your conclusion seems to be that we need a better majority; I suspect we need a better dBB as well.
There is room for improvement in the presentation of dBB - it is quite typical to use many particles, while, in the light of QFT, it would be much more reasonable to use a general configuration space, which can just as well be a field.

But the main reason for not liking dBB is obvious - it is the strong belief in fundamental relativity. And here improvements are impossible - any realistic interpretation of QM has to violate fundamental relativity.

This is not a problem of physics - effective relativity is not a problem at all for dBB; the first model for the EM field was already part of Bohm's first paper. It is a problem of philosophy - the belief in fundamental relativity, or the spacetime interpretation, as opposed to effective relativity, which is compatible with the Lorentz ether.
 
  • #783
Ilja said:
I'm sure, because I use the dBB interpretation, and in the dBB interpretation there is no such contradiction.

The collapse of the wave function in dBB is described by the unitary evolution of the wave function of the object together with the apparatus, and by the evolution of the object and the apparatus themselves (via the guiding equation). One can combine the full wave function $\psi_{\text{full}}(o,a,t)$ with the trajectory of the apparatus $a(t)$ to define an effective wave function of the object, $\psi_o(o,t) = \psi_{\text{full}}(o,a(t),t)$. The evolution equation for this effective wave function is not unitary during the measurement, because unitary evolution holds only for closed systems, or at least for systems that are not interacting with their environment. Before and after the measurement - that is, once o no longer interacts with anything else - the evolution is unitary. This follows easily from the unitary evolution of the full system.

As far as I know, it is impossible to prove violations in dBB without using some assumptions beyond unitary evolution; otherwise such a proof could be transferred to standard quantum theory. If you disagree, could you please give a reference to such a proof?


Ilja said:
There is room for improvement in the presentation of dBB - it is quite typical to use many particles, while, in the light of QFT, it would be much more reasonable to use a general configuration space, which can just as well be a field.

But the main reason for not liking dBB is obvious - it is the strong belief in fundamental relativity. And here improvements are impossible - any realistic interpretation of QM has to violate fundamental relativity.

This is not a problem of physics - effective relativity is not a problem at all for dBB; the first model for the EM field was already part of Bohm's first paper. It is a problem of philosophy - the belief in fundamental relativity, or the spacetime interpretation, as opposed to effective relativity, which is compatible with the Lorentz ether.

I gave my reasons to think that fundamental relativity has not been ruled out - the absence of loophole-free demonstrations of violations and the absence of a contradiction-free proof of violations in quantum theory.
 
  • #784
harrylin said:
To me the contrary is obvious, because I deem Newton to have been of sound mind, based on what he did express; and since neither of us has proof of what Newton really thought on this matter, we have only our personal estimations of his thinking. :wink:
My point was not about Newton's thinking, but about the equations. The link http://plato.stanford.edu/entries/Newton-philosophy/#ActDis has already been posted here; it shows that Newton was aware that there is action at a distance in the equations, and that he considered the lack of mediation a problem.

I'm afraid that you did not understand my question, which is not about speed. The usual discussions cover only half of the "spookiness". How does De Broglie's theory explain that an action on a particle at one end of the universe can have an undiminished effect on another particle at the other end of the universe? What physical mechanism did he propose for that? Note that if it requires a long answer, I'll start it as a new topic. :-p
dBB does not give any answer, and does not even try to give one. So the situation is quite similar to Newtonian gravity, where the formulas do not tell us anything about an explanation for gravity.

And, similarly, I think this is an interesting open problem and could be a hint for developing some subquantum theories. A theory which, for example, restricts the maximum speed of this spooky action would have to violate quantum theory.

This would be a second hint for subquantum theory, the first being that QM fails for very small values of ψ. That's because around ψ(q) = 0 the dBB velocity diverges, even if only in a quite harmless way (increasingly fast rotation around the zero).
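A quick numerical illustration of that divergence (a sketch in assumed units $\hbar = m = \omega = 1$; the example state is my choice, not taken from the thread):

```python
import numpy as np

# Sketch of the node divergence.  The example state is a superposition
# of the two lowest harmonic-oscillator eigenstates; it develops a node
# near q = 1/sqrt(2) at t = pi, where the guiding-equation velocity
# v = Im(d_q psi / psi) blows up.

def phi0(q):
    return np.pi ** -0.25 * np.exp(-q ** 2 / 2)

def phi1(q):
    return np.pi ** -0.25 * np.sqrt(2.0) * q * np.exp(-q ** 2 / 2)

def psi(q, t):
    return phi0(q) * np.exp(-0.5j * t) + phi1(q) * np.exp(-1.5j * t)

def v_dbb(q, t, dq=1e-6):
    # finite-difference derivative of psi, then v = Im(psi'/psi)
    dpsi = (psi(q + dq, t) - psi(q - dq, t)) / (2 * dq)
    return np.imag(dpsi / psi(q, t))

q = np.linspace(0.5, 0.9, 2001)
for eps in (0.3, 0.1, 0.03, 0.01):
    v = v_dbb(q, np.pi - eps)
    print(f"t = pi - {eps}: max |v| on the grid = {np.abs(v).max():.1f}")

# max |v| grows roughly like 1/eps as the node forms, while |psi|^2
# there goes to zero - the 'quite harmless' divergence mentioned above.
```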
 
  • #785
Ilja said:
And, similarly, I think this is an interesting open problem and can be a hint for developing some subquantum theories. A theory which, for example, restricts the maximum speed of this spooky action should violate quantum theory.
Not sure if Gisin's experiment was posted in this thread, but his group suggested that the speed of this non-local connection must be at least 10,000 times the speed of light:
For instance, if such a privileged reference frame exists and is such that the Earth's speed in this frame is less than 10⁻³ that of the speed of light, then the speed of this spooky influence would have to exceed that of light by at least 4 orders of magnitude.
Testing spooky action at a distance
http://arxiv.org/pdf/0808.3316v1.pdf
 
  • #786
bohm2 said:
Not sure if Gisin's experiment was posted in this thread, but his group suggested that the speed of this non-local connection must be at least 10,000 times the speed of light:

Testing spooky action at a distance
http://arxiv.org/pdf/0808.3316v1.pdf
Nice find! I'll read it. :smile:
Note that your arXiv link is a version of a peer-reviewed publication:
http://www.nature.com/nature/journal/v454/n7206/full/nature07121.html

PS. A quick question: I quickly looked it over, but I could not immediately "get" the idea behind it.
What is, in a nutshell, their method for determining the minimal speed of "spooky action at a distance"? They mention two-photon interference, which sounds somewhat like MMX (even more like KTX). Where is "Bell" in all that? :confused:
 
  • #787
bohm2 said:
Not sure if Gisin's experiment was posted in this thread, but his group suggested that the speed of this non-local connection must be at least 10,000 times the speed of light:

Testing spooky action at a distance
http://arxiv.org/pdf/0808.3316v1.pdf

Let me just note that the article in question does not claim simultaneous elimination of both the detection and the locality loopholes (most likely the detection loophole remains, as is typical of experiments with photons), so, strictly speaking, their experiment does not even demonstrate violations of the Bell inequalities.
 
  • #788
harrylin said:
Nice find! I'll read it. :smile:
Note that your arXiv link is a version of a peer-reviewed publication:
http://www.nature.com/nature/journal/v454/n7206/full/nature07121.html

PS. A quick question: I quickly looked it over, but I could not immediately "get" the idea behind it.
What is, in a nutshell, their method for determining the minimal speed of "spooky action at a distance"? They mention two-photon interference, which sounds somewhat like MMX (even more like KTX). Where is "Bell" in all that? :confused:

The idea is that if there is another, greater limiting speed, say 100c, then there has to be a corresponding superlight cone, and there will be space-like separated event pairs with respect to this superlight cone too. For such event pairs the Bell inequalities should hold. So one has to test the violation of Bell inequalities for large enough sets of event pairs that there is no room left for the 100c superlight cone.

The next idea was that there is a reasonable hypothesis for the orientation of the superlight cone - one can guess that the time direction of the rest frame of the background radiation will be time-like for the superlight cone too. Then there is no need to rule out all the skew superlight cones; one only needs to care about the much smaller set of superlight cones compatible with the background radiation rest frame. So one only has to look for event pairs which have approximately equal time in the CMBR frame.
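A rough kinematic sketch of how such a bound comes about (the numbers below are illustrative placeholders, not the values of the published analysis):

```python
import numpy as np

# Rough kinematic sketch of the speed bound.  Two detection events,
# separated by L in the lab and nearly simultaneous there (offset dt),
# are Lorentz-transformed into a privileged frame moving at beta*c
# along the detector axis; an influence linking them needs speed
# v' = |dx'/dt'| in that frame.  All numbers are illustrative.

c = 299_792_458.0  # m/s
L = 18e3           # m, roughly the Geneva experiment's separation
dt = 1e-12         # s, assumed residual lab-frame simultaneity offset

for beta in (1e-4, 1e-3, 1e-2):
    gamma = 1.0 / np.sqrt(1.0 - beta ** 2)
    dt_p = gamma * (dt - beta * L / c)  # time offset in privileged frame
    dx_p = gamma * (L - beta * c * dt)  # separation in privileged frame
    print(f"beta = {beta:.0e}: required v'/c ~ {abs(dx_p / (c * dt_p)):.1e}")

# This naive bound scales like 1/beta.  As I read the paper's scheme,
# the published bound is stronger because Earth's rotation sweeps the
# detector axis through all orientations, so the moment of best
# alignment (where only timing precision limits the bound) is also sampled.
```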
 
  • #789
Ilja said:
The idea is that if there is another, greater limiting speed, say 100c, then there has to be a corresponding superlight cone, and there will be space-like separated event pairs with respect to this superlight cone too. For such event pairs the Bell inequalities should hold. So one has to test the violation of Bell inequalities for large enough sets of event pairs that there is no room left for the 100c superlight cone.

The next idea was that there is a reasonable hypothesis for the orientation of the superlight cone - one can guess that the time direction of the rest frame of the background radiation will be time-like for the superlight cone too. [..]
Ok, thanks - it's starting to dawn on me now. :smile:
I have the impression that in that last paper they tried to be so general as not to need the CMBR hypothesis.

Anyway, my hunch that such an interferometer experiment is unlikely to yield anything "spooky" is now supported by an old discussion on this forum (the keyword I searched with was "Franson"):

https://www.physicsforums.com/showthread.php?t=229393

However, for me there is not enough explanation there ...
 
  • #790
This has been posted here before but was just published in Nature Physics. It argues that, barring loopholes, if the non-local effects observed in Bell-type experiments propagate at any finite speed, then non-locality could be exploited for superluminal communication:
The new hidden influence inequality shows that the get-out won't work when it comes to quantum predictions. To derive their inequality, which sets up a measurement of entanglement between four particles, the researchers considered what behaviours are possible for four particles that are connected by influences that stay hidden and that travel at some arbitrary finite speed. Mathematically (and mind-bogglingly), these constraints define an 80-dimensional object. The testable hidden influence inequality is the boundary of the shadow this 80-dimensional shape casts in 44 dimensions. The researchers showed that quantum predictions can lie outside this boundary, which means they are going against one of the assumptions. Outside the boundary, either the influences can't stay hidden, or they must have infinite speed.
Looking Beyond Space and Time to Cope With Quantum Theory
http://www.sciencedaily.com/releases/2012/10/121028142217.htm

Quantum non-locality based on finite-speed causal influences leads to superluminal signalling
http://www.nature.com/nphys/journal/vaop/ncurrent/full/nphys2460.html

Full article posted in arxiv:
http://arxiv.org/pdf/1110.3795v1.pdf
 
  • #791
I know this isn't likely to sway any opinions, but for completeness and future reference, here is a follow-up piece by Gisin to the J. D. Bancal et al. Nature Physics paper linked above, just posted on arXiv:
We investigate possible explanations of quantum correlations that satisfy the principle of continuity, which states that everything propagates gradually and continuously through space and time. In particular, following [J.D. Bancal et al, Nature Physics 2012], we show that any combination of local common causes and direct causes satisfying this principle, i.e. propagating at any finite speed, leads to signalling. This is true even if the common and direct causes are allowed to propagate at a supraluminal-but-finite speed defined in a Newtonian-like privileged universal reference frame. Consequently, either there is supraluminal communication or the conclusion that Nature is nonlocal (i.e. discontinuous) is unavoidable.
Quantum correlations in Newtonian space and time: arbitrarily fast communication or nonlocality
http://lanl.arxiv.org/pdf/1210.7308.pdf
 
  • #792
Lecture by lead author J. D. Bancal at the Perimeter Institute:
The experimental violation of Bell inequalities using spacelike separated measurements precludes the explanation of quantum correlations through causal influences propagating at subluminal speed. Yet, it is always possible, in principle, to explain such experimental violations through models based on hidden influences propagating at a finite speed v>c, provided v is large enough. Here, we show that for any finite speed v>c, such models predict correlations that can be exploited for faster-than-light communication. This superluminal communication does not require access to any hidden physical quantities, but only the manipulation of measurement devices at the level of our present-day description of quantum experiments. Hence, assuming the impossibility of using quantum non-locality for superluminal communication, we exclude any possible explanation of quantum correlations in term of finite-speed influences.
http://pirsa.org/displayFlash.php?id=11110145
 
  • #793
  • #794
ZapperZ said:
The point here is that this thread appears to indicate that even IF all the loopholes are closed (and I will make MY prediction here that in the near future, say within 3 years, ALL the loopholes will be closed in one single experiment), the intrinsic nature of the theory will STILL not falsify local realism.

Three years have passed. As far as I know, all the loopholes have not been closed in one single experiment. For example, reporting some further progress in a recent article http://arxiv.org/abs/1212.0533, Zeilinger et al. still admit that “The realization of an experiment that is free of all three assumptions – a so-called loophole-free Bell test – remains an important outstanding goal for the physics community”.

I am writing this without any Schadenfreude. I do appreciate that a loophole-free experiment can be performed any moment now. It looks like the race to conduct the first experiment of this kind is really fierce. E.g., the following quote is interesting (SCIENCE, VOL 331, P. 1380 (2011)): “Zukowski thinks the race to close all the loopholes simultaneously will soon be over. “Conservatively, it could take another 5 years to complete, but it could also be done tomorrow,” he says. “We’re at the stage where everyone is scared to read their competitors’ papers, in case they find they have been beaten. The only real question is: Who will win?””

I also had this impression of a fierce race listening to talks on quantum foundations experiments at several conferences last year. On the other hand, some experimentalists admitted (typically not in their official talks :-)) that they encounter some formidable challenges.

So I am just trying to say that these three years since the start of this thread have demonstrated again that it is extremely difficult to demonstrate violations of the genuine Bell inequalities. Will they be demonstrated by the fiftieth anniversary of Bell's article next year? Or ever? My prediction is “no”. But I may be mistaken.
 
  • #795
Gordon Watson said:
The point is that particles in a singlet state have, both theoretically and experimentally, a higher correlation than you seem to allow (or expect) in your work.

I guess you're just bashing your response out, so this is not intended to be a substantive criticism, but it's not a single correlation that's too high; it's a sum of absolute values of a difference and a sum of four different correlations, $|A-B|+|C+D|$ (at least, that's what it is in the CHSH formulation; in no case is the difficulty for local realism that a correlation coefficient simpliciter is too high).
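For concreteness, a small sketch of that CHSH quantity, assuming the standard singlet prediction $E(a,b) = -\cos(a-b)$ and the usual optimal settings:

```python
import numpy as np

# Sketch of the CHSH quantity for the singlet state, assuming the
# standard quantum prediction E(a, b) = -cos(a - b) for measurement
# directions at angles a and b (radians), at the usual optimal settings.

def E(a, b):
    return -np.cos(a - b)

a, a2 = 0.0, np.pi / 2            # Alice's two settings
b, b2 = np.pi / 4, 3 * np.pi / 4  # Bob's two settings

S = abs(E(a, b) - E(a, b2)) + abs(E(a2, b) + E(a2, b2))
print(S)  # 2*sqrt(2) ~ 2.828, above the local-realist bound of 2

# Each individual correlation is only 1/sqrt(2) in magnitude; it is the
# combination of the four that exceeds the bound.
```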
 
  • #796
Gordon Watson said:
The point is that particles in a singlet state deliver, both theoretically and experimentally, a higher expectation value* than you seem to allow (or expect) in your work.

Dear Gordon Watson,

I tried to explain in my post 753 in this thread why I cannot agree with you.
 
  • #797
Gordon Watson said:
OK; disagreeing with me is no big deal. BUT I'm NOT aware of any rational quantum physicist that agrees with you:

"... that there are some reasons to believe Bell inequalities cannot be violated either in experiments or in quantum theory." [Cited above.]​

So Santos, Marshall, nightlight are not rational quantum physicists, in your book. They are in mine. Let me add that I gave those "reasons" and properly published them (although I just repeated other people's arguments, as far as those "reasons" are concerned).

Another thing: "the foundations of quantum mechanics ... remain hotly debated in the scientific community, and no consensus on essential questions has been reached" (Schlosshauer, Kofler, Zeilinger, http://arxiv.org/abs/1301.1069 ). So disagreement is the "new normal" for quantum foundations.
 
  • #798
Gordon Watson said:
Using your terms to conclude re my position (vis-a-vis yours), I am satisfied that:

1. Bell inequalities are repeatedly violated by experiment.

2. Bell inequalities are certainly violated by quantum theory.

3. Except for their motivation toward better experiments, the remaining loopholes are of no consequence.
..
Agree. 1 and 2 are facts, and 3 seems to me to be a much more reasonable position than that taken by the loophole people.
 
  • #799
Gordon Watson said:
Using your terms to conclude re my position (vis-a-vis yours), I am satisfied that:

You are satisfied; I'm not.

Gordon Watson said:
1. Bell inequalities are repeatedly violated by experiment.

Not unless you ignore the loopholes.

Gordon Watson said:
2. Bell inequalities are certainly violated by quantum theory.

Not unless you use as assumptions mutually contradictory postulates of standard quantum theory, such as unitary evolution and the projection postulate. However, if you use mutually contradictory assumptions, you can get any conclusion, however absurd.

Gordon Watson said:
3. Except for their motivation toward better experiments, the remaining loopholes are of no consequence.

This is an opinion, not a fact.
 
  • #800
nanosiborg said:
[..] position [..] taken by the loophole people.
Who are these "loophole people"? The term suggests people who stick to an opinion against all odds, and I would be surprised if anyone here identifies with such a position - in which case it's just a strawman (the term is derogatory, as a "loophole" denotes a means of escape or evasion).
 
  • #801
harrylin said:
Who are these "loophole people"?
People who think that a loophole-free test will change the current situation, which is that qm predictions are in line with results and violate BI, and lhv predictions are not in line with results.
 
  • #802
nanosiborg said:
People who think that a loophole-free test will change the current situation, which is that qm predictions are in line with results and violate BI, and lhv predictions are not in line with results.

Whether I am one of those "loophole people" or not, I respectfully disagree with your assessment of the current situation, e.g., with the following phrase: "lhv predictions are not in line with results." As long as there are loopholes in experiments (and there have been no loophole-free experiments so far), the results of the experiments cannot rule out all lhv theories, so at least some lhv theories' predictions are in line with the results.
 
  • #803
akhmeteli said:
As long as there are loopholes in experiments (and there have been no loophole-free experiments so far), the results of the experiments cannot rule out all lhv theories ...
Strictly speaking, this is correct. But I think the evidence is overwhelming that if a loophole-free test were done, then qm would correctly predict the results and lhv would not.

akhmeteli said:
... so at least some lhv theories' predictions are in line with the results.
Assumptions are required because of the inability to close all loopholes in the same test. So far, given the (reasonable, imo) assumptions used by the testers, qm agrees with experiment and lhv doesn't.

The incompatibility between qm and lhv has been mathematically proven. They necessarily predict a different correlation between θ and the rate of coincidence detection. So, if qm is correct, then (Bell) lhv models of quantum entanglement are ruled out.
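A sketch of that difference, using the standard singlet prediction for qm and Bell's simple deterministic local model as one representative lhv model:

```python
import numpy as np

# Sketch of the disagreement: the singlet prediction E_qm(theta) =
# -cos(theta) versus Bell's simple deterministic local model, which
# gives the linear correlation E_lhv(theta) = -1 + 2*theta/pi (used
# here as one representative lhv model, not the most general one).

theta = np.linspace(0.0, np.pi, 7)
E_qm = -np.cos(theta)
E_lhv = -1.0 + 2.0 * theta / np.pi

for t, eq, el in zip(np.degrees(theta), E_qm, E_lhv):
    print(f"theta = {t:5.1f} deg: E_qm = {eq:+.3f}, E_lhv = {el:+.3f}")

# The two agree only at 0, 90 and 180 degrees; in between the quantum
# correlation is strictly stronger, which is what Bell tests probe.
```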

I'm betting that qm will continue to be confirmed, even in a loophole-free test.
 
  • #804
nanosiborg said:
Strictly speaking, this is correct. But I think the evidence is overwhelming that if a loophole-free test were done, then qm would correctly predict the results and lhv would not.

This is an opinion, not a fact. Somebody believes the evidence is overwhelming; somebody else believes there is no evidence, as there have been no loophole-free experiments. How do you like the following "overwhelming evidence" that planar Euclidean geometry (PEG) is wrong (I already offered it in this thread)? PEG predicts that the sum of the angles of any triangle is 180 degrees, whereas experiments demonstrate with high confidence that the sum of the angles of a quadrangle, or of a triangle on a sphere, is not 180 degrees. The obvious "loopholes" will certainly be closed simultaneously in future experiments :-)

nanosiborg said:
Assumptions are required because of the inability to close all loopholes in the same test. So far, given the (reasonable, imo) assumptions used by the testers, qm agrees with experiment and lhv doesn't.

My question is: which assumption is more reasonable, local realism or, say, fair sampling? Apparently you'd vote for the latter, and I would vote for the former. So who's right? I believe so far this is just a matter of opinion.
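For a sense of what the fair-sampling assumption buys, here is a back-of-envelope sketch. It assumes the common bookkeeping choice of assigning +1 to undetected particles; the resulting threshold is the well-known Garg-Mermin efficiency bound for CHSH:

```python
import numpy as np

# Back-of-envelope sketch of the detection loophole.  Assume each side
# detects with efficiency eta and an undetected particle is assigned
# the outcome +1.  For the singlet, each correlator then becomes
# E_eff = eta**2 * E + (1 - eta)**2, so the linear CHSH combination
# S = E1 + E2 + E3 - E4, which reaches 2*sqrt(2) at optimal settings,
# becomes S_eff = eta**2 * 2*sqrt(2) + 2*(1 - eta)**2.

def S_eff(eta):
    return eta ** 2 * 2.0 * np.sqrt(2.0) + 2.0 * (1.0 - eta) ** 2

for eta in (0.80, 0.8284, 0.85, 1.00):
    s = S_eff(eta)
    verdict = "violates" if s > 2.0 else "does not violate"
    print(f"eta = {eta:.4f}: S_eff = {s:.4f} ({verdict} the bound of 2)")

# The crossover eta = 2/(1 + sqrt(2)) ~ 0.8284 is the familiar
# Garg-Mermin threshold: below it a local model can reproduce the
# observed coincidences, hence the need for the fair-sampling
# assumption in the photon experiments of that era.
```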

nanosiborg said:
The incompatibility between qm and lhv has been mathematically proven. They necessarily predict a different correlation between θ and the rate of coincidence detection. So, if qm is correct, then (Bell) lhv models of quantum entanglement are ruled out.

I agree, the Bell theorem proves incompatibility between standard quantum theory and local realism. I argue, though, that this is not a problem for local realism, as, strictly speaking, standard quantum theory is incompatible with itself (I have in mind the notorious problem of measurements in quantum theory), and so it cannot be completely correct. To prove incompatibility of standard quantum theory and local realism, you need to prove that the Bell inequalities can be violated in quantum theory. To this end, you need to use two mutually contradictory postulates of standard quantum theory: unitary evolution and, say, the projection postulate.


nanosiborg said:
I'm betting that qm will continue to be confirmed, even in a loophole-free test.

Strictly speaking, this phrase just attests to the strength of your opinion, not to its correctness. I don't believe local realism will be ruled out in loophole-free experiments, but again, this is just my opinion, not a fact. Maybe we should just wait and see.
 
  • #805
Gordon Watson said:
Andy, while we're waiting, please, would you mind spelling out what you mean by "local realism"?

Accepting that by "local" you mean "Einstein-local", maybe we could just focus on what "realism" means to you, please?

Locality, in my book, means that no effect can have its cause anywhere beyond its past light-cone. I guess this is what you call "Einstein-local".

Realism, in my book, is not the realism of the EPR article, i.e. I don't believe that, say, a particle has definite values of coordinates, momentum, spin projections, etc., whether the relevant observables are measured or not. You may say that I do not seek noncontextual hidden variables. The version of realism that I accept is contextual: any state can be described by some set of parameters that uniquely define the state's evolution. I would not call those parameters "hidden variables", as, say, in the models of my articles, they are not exactly hidden: they are the potentials of the electromagnetic fields and their derivatives. This version is "contextual" as the set of parameters must describe the relevant instruments as well.
 
  • #806
akhmeteli said:
My question is: which assumption is more reasonable, local realism or, say, fair sampling? Apparently you'd vote for the latter, and I would vote for the former. So who's right? I believe so far this is just a matter of opinion.
Yes, I'd vote for the latter. We could argue about the merits of our apparently different readings of certain articles, but I prefer to just wait for a loophole-free test.

What do you think is the likelihood of a loophole-free test in the foreseeable future?

akhmeteli said:
I agree, the Bell theorem proves incompatibility between standard quantum theory and local realism. I argue, though, that this is not a problem for local realism, as, strictly speaking, standard quantum theory is incompatible with itself (I have in mind the notorious problem of measurements in quantum theory), and so it cannot be completely correct. To prove incompatibility of standard quantum theory and local realism, you need to prove that the Bell inequalities can be violated in quantum theory. To this end, you need to use two mutually contradictory postulates of standard quantum theory: unitary evolution and, say, the projection postulate.
There's no measurement problem of the sort you mention (i.e., qm being incompatible with itself due to contradictory dynamical laws or postulates) with a minimalist statistical interpretation. So, in the minimalist view, if a loophole-free test affirms qm, then local realism (at least in the form of Bell lhv models) will be definitively ruled out.
 
  • #807
nanosiborg said:
Strictly speaking, this is correct. But I think the evidence is overwhelming that if a loophole-free test were done, then qm would correctly predict the results and lhv would not.[..]
[..]
What do you think is the likelihood of a loophole-free test in the foreseeable future?
I wonder if such a conclusive test will be possible; the failure to accomplish that feat in the course of decades suggests to me that it may be a law of nature that such a test is not possible (similar to the relativity and uncertainty principles).
 
  • #808
nanosiborg said:
What do you think is the likelihood of a loophole-free test in the foreseeable future?

I don't know. I just don't have enough information. Some knowledgeable people believe such a test is imminent; they say something like "in a year or two". I won't be surprised, though, if such a test takes much, much more time. Whenever it happens, though, I don't expect any violations in a loophole-free test.

nanosiborg said:
There's no measurement problem of the sort you mention (i.e., qm being incompatible with itself due to contradictory dynamical laws or postulates) with a minimalist statistical interpretation.

I did not consider the minimalist statistical interpretation, just standard quantum theory. However, based on the discussion of some other interpretations (such as the Bohmian one) in this thread, I tend to think that if there are no contradictions in an interpretation, then either it is impossible to prove that the Bell inequalities can be violated, or the interpretation's predictions differ from those of standard quantum theory, which makes its experimental status dubious.

nanosiborg said:
So, in the minimalist view, if a loophole-free test affirms qm, then local realism (at least in the form of Bell lhv models) will be definitively ruled out.

Irrespective of any interpretation, I agree that a loophole-free experimental demonstration of violations would make a local realist's life much more difficult, although "definitively" would be a strong word even then - e.g., there would still be the possibility of superdeterminism.
 
  • #809
akhmeteli said:
I don't know. I just don't have enough information. Some knowledgeable people believe such a test is imminent; they say something like "in a year or two". I won't be surprised, though, if such a test takes much, much more time. Whenever it happens, though, I don't expect any violations in a loophole-free test.
Whatever the results it will be exciting when (if) it happens.

akhmeteli said:
I did not consider the minimalist statistical interpretation, just standard quantum theory.
I was thinking of the minimalist statistical interpretation as being standard quantum theory.

akhmeteli said:
Irrespective of any interpretation, I agree that a loophole-free experimental demonstration of violations would make a local realist's life much more difficult, although "definitively" would be a strong word even then - e.g., there would still be the possibility of superdeterminism.
I consider superdeterminism (a metaphysical conspiracy theory) to be an unacceptable stretch anyway. Given a loophole-free test that confirms qm and falsifies lhv, I don't see superdeterminism being taken seriously by anybody. I mean, if that happens, local realists will have to admit that their program has been definitively refuted and that Bell lhv models of quantum entanglement are definitively ruled out.
 
  • #810
nanosiborg said:
I was thinking of the minimalist statistical interpretation as being standard quantum theory.

If this interpretation adopts both unitary evolution (UE) and the projection postulate (PP) of standard quantum theory, it also adopts their contradictions. If you believe you have a solution to the problem of measurements in standard quantum theory... Well, congratulations... Good luck "selling" your solution to the physics community... If, however, this interpretation does not adopt UE and PP, it's not standard quantum theory. Moreover, it would then be difficult, if not impossible, to prove that there can be violations in this interpretation.

nanosiborg said:
I consider superdeterminism (a metaphysical conspiracy theory) to be an unacceptable stretch anyway. Given a loophole-free test that confirms qm and falsifies lhv, I don't see superdeterminism being taken seriously by anybody. I mean, if that happens, local realists will have to admit that their program has been definitively refuted and that Bell lhv models of quantum entanglement are definitively ruled out.

I agree that superdeterminism does not look good. However, I don't know how to refute 't Hooft's reasoning in favor of superdeterminism (and by the way, 't Hooft is not just "anybody"): "if you believe in determinism, you have to believe it all the way." (http://arxiv.org/abs/1112.1811)
 
