A Bell Theorem with no locality assumption?

  • #51
Is the detection loophole the fact that the experimental correlation function does not give -1 at angle 0, and that the CHSH value comes out at 2.38 rather than 2.82?

This can be seen in the graph at the end of this paper:

http://arxiv.org/PS_cache/quant-ph/pdf/9806/9806043v1.pdf
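
As a quick reference for where the 2.82 figure comes from, here is a minimal sketch of mine (not from the paper) of the ideal QM prediction for CHSH at the standard angles, assuming the usual correlation E(a,b) = cos 2(a-b) for polarization-entangled photons:

```python
import math

# Ideal QM correlation for polarization-entangled photons: E(a, b) = cos(2(a - b)).
def E(a, b):
    return math.cos(2 * (a - b))

# Standard CHSH angles (radians): a = 0, a' = 45 deg, b = 22.5 deg, b' = 67.5 deg.
a, a2 = 0.0, math.pi / 4
b, b2 = math.pi / 8, 3 * math.pi / 8

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(S)  # 2*sqrt(2) ~ 2.828; local realism caps S at 2, the experiment quoted gives 2.38
```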
 
  • #52
If you like, can we say of the formula used that
<AB> counts the matched pairs,
whereas <A><B> counts the events not detected as pairs (hence the loophole), so that the experimental result should give <AB>-<A><B> if we take the detection loophole into account?
 
  • #53
The detection loophole is the fact that the interpretation of Bell tests using photons must rely on the fair sampling assumption. This assumption means that the correlations in photon pairs where one of the two photons goes undetected would (had both been detected) be the same as for the pairs where both photons were detected.
If that is not so, then the correlations can be affected by the detection of different subsamples under different analyzer settings.
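
To make the subsample effect concrete, here is a toy Monte Carlo sketch of my own (not a model anyone in this thread has proposed): a deterministic local model plus a setting-dependent detection rule. The correlation over the detected subsample comes out much stronger than over the full ensemble, which is exactly the distortion the fair sampling assumption excludes:

```python
import math, random

random.seed(0)

def outcome(setting, lam):
    # Deterministic local rule: +1 if the hidden polarization lam is within 45 deg of the analyzer.
    return 1 if math.cos(2 * (setting - lam)) >= 0 else -1

def detected(setting, lam, threshold=0.5):
    # Toy detection rule: photons nearly diagonal to the analyzer go undetected.
    return abs(math.cos(2 * (setting - lam))) > threshold

a, b = 0.0, math.pi / 8  # analyzer settings 22.5 deg apart
full, sub = [], []
for _ in range(200_000):
    lam = random.uniform(0, math.pi)  # shared hidden variable
    A, B = outcome(a, lam), outcome(b, lam)
    full.append(A * B)
    if detected(a, lam) and detected(b, lam):
        sub.append(A * B)

print(sum(full) / len(full))  # full ensemble: the sawtooth value 1 - 4|a-b|/pi = 0.5
print(sum(sub) / len(sub))    # detected subsample: ~1.0 at these angles
```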
 
  • #54
Anton Zeilinger said:
http://discovermagazine.com/2011/jul-aug/14-anton-zeilinger-teleports-photons-taught-the-dalai-lama/article_view

So does that mean Einstein was wrong?

There are still some technical loopholes in the experiments testing Bell’s theorem that could allow for a local realistic explanation of entanglement. For instance, we don’t detect all the particles in an experiment, and therefore it is conceivable that, were we to detect every single particle, some would not be in agreement with quantum mechanics. There is *a very remote chance* that nature is really vicious and that it allows us to detect only particles that agree with quantum mechanics. If so, and if we could ever detect the others, then local realism could be saved. But I think we are close to closing all of these loopholes, which would be a significant achievement with practical implications for quantum technologies.

...
 
  • #55
April 13, 2010:
http://www.nist.gov/pml/div686/detector_041310.cfm
So we are waiting ...
 
  • #56
... for that 1% to turn everything upside down ... :biggrin:
 
  • #57
DevilsAvocado said:
... for that 1% to turn everything upside down ... :biggrin:
Who is talking about 1%?
I am talking about 90% turning it into something slightly more classical.

Didn't you know that there are no reports of experiments aiming for increased coincidence rates?
 
  • #58
zonde said:
Who is talking about 1%?

There must be something wrong with my calculator because I can’t even get the basic math right...
NIST Detector Counts Photons With 99 Percent Efficiency
...
Who is talking about 1%?
...
I am talking about 90%

So, are you saying (for real) that there is a 90% chance for Local Reality to survive? :bugeye::bugeye::bugeye:
 
  • #59
There is no problem with the math.
The fact that we have detectors with 99% detection efficiency does not automatically settle the question of a detection-loophole-free photon Bell test.
A Bell test still has to be performed using these detectors, and it should give a high coincidence count rate while violating Bell inequalities by a significant amount at the same time.

For example you can take a look at this experiment:
http://arxiv.org/abs/quant-ph/9810003
It says:
"After passing through adjustable irises, the light was collected using 35mm-focal length doublet lenses, and directed onto single-photon detectors — silicon avalanche photodiodes (EG&G #SPCM’s), with efficiencies of ∼ 65% and dark count rates of order 100s−1."
and:
"The collection irises for this data were both only 1.76 mm in diameter – the resulting collection efficiency (the probability of collecting one photon conditioned on collecting the other) is then ∼ 10%."

So while the detector efficiency was around 65%, the coincidence rate was only around 10%. And it is this coincidence rate that matters if we want to speak about closing the detection loophole.
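
A rough back-of-the-envelope sketch of why that matters (my numbers; the thresholds are the commonly cited ones, namely 2(√2-1) ≈ 82.8% for a CHSH test with maximally entangled states and Eberhard's 2/3 bound for non-maximally entangled states):

```python
import math

detector_eff = 0.65    # APD quantum efficiency, from the paper quoted above
collection_eff = 0.10  # prob. of collecting one photon conditioned on collecting the other

# One rough way to combine them: the chance that a detected photon's partner
# is both collected and detected.
pair_eff = detector_eff * collection_eff
print(pair_eff)  # ~0.065, i.e. ~6.5% -- nowhere near loophole-free territory

# Commonly cited efficiency thresholds for a detection-loophole-free CHSH test:
print(2 * (math.sqrt(2) - 1))  # ~0.828, maximally entangled states
print(2 / 3)                   # ~0.667, Eberhard bound, non-maximally entangled states
```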
 
  • #60
zonde said:
... coincidence rate was only around 10%


Okay, so I’m asking you again:

DevilsAvocado said:
So, are you saying (for real) that there is a 90% chance for Local Reality to survive?
:bugeye::bugeye::bugeye:
 
  • #61
Make it 100%. I have no doubt that local realism holds at least as far as quantum entanglement is concerned.
 
  • #62
zonde said:
Make it 100%. I have no doubt that local realism holds at least as far as quantum entanglement is concerned.
That would mean that the qm predictions for all optical Bell tests are incorrect.

That's hard to accept, especially since the qm-predicted correlation between the angular difference of the crossed polarizers and the rate of coincidental detection is also intuitively in line with classical optics principles, whereas that of the archetypal local realistic model isn't.
 
  • #63
Hey TT! Welcome back! I’m sorry for that terrible "silly joke"... :blushing: pleeeeeeeeease tell me that this had absolutely nothing to do with your 'pause'...
 
  • #64
zonde said:
Make it 100%. I have no doubt that local realism holds at least as far as quantum entanglement is concerned.

zonde, you’re refuting the most precise theory we’ve got, thousands of experiments, the consensus of the global scientific community, the work and words of Anton Zeilinger et al., etc., etc. ...

How does it feel to be a heretic? Left all alone out in the cold?
 
  • #65
ThomasT said:
That would mean that the qm predictions for all optical Bell tests are incorrect.
No, it doesn't mean that. QM predictions are tested and they work in the inefficient-detection case. Experiments with efficient detection cannot change that.

But it would mean that QM predictions are incorrect in the single-photon limit.


ThomasT said:
That's hard to accept, especially since the qm-predicted correlation between the angular difference of the crossed polarizers and the rate of coincidental detection is also intuitively in line with classical optics principles, whereas that of the archetypal local realistic model isn't.
I am not saying that the "archetypal local realistic model" used by Bell is correct. It was successfully used to make a mathematical argument, but it is very poor as a description of physical reality.
 
  • #66
DevilsAvocado said:
zonde, you’re refuting the most precise theory we’ve got, thousands of experiments, the consensus of the global scientific community, the work and words of Anton Zeilinger et al., etc., etc. ...

How does it feel to be a heretic? Left all alone out in the cold?
I can say the same as in my reply to Thomas.
Experiments with efficient detection cannot change the results of experiments with inefficient detection.

And I don't see that I am alone.
The idea that QM does not apply to single particles is quite common.
 
  • #67
zonde said:
And I don't see that I am alone.

In mainstream science you are, or you’ll have to show me at least one reputable professor working at a reputable institute, preferably with one or two awards, accepted by the community, who agrees with you that local realism has a 100% chance to survive.

And while you’re at it, you could maybe also explain to me why Zeilinger, Aspect and Clauser have won the Wolf Prize in Physics (http://www.wolffund.org.il/cat.asp?id=25&cat_title=PHYSICS), one of the most prestigious in the world, along with 100,000 Euros, "for their fundamental conceptual and experimental contributions to the foundations of quantum physics, specifically an increasingly sophisticated series of tests of Bell’s inequalities or extensions thereof using entangled quantum states"...?

I mean... you could hardly claim that they’re just "nice" to Zeilinger, Aspect and Clauser, right?

And why are there annual Nobel predictions naming them (http://blogs.scientificamerican.com/observations/2011/09/21/annual-nobel-predictions-announced-but-forecasting-prizes-remains-a-tricky-business/) for "their tests of Bell’s inequalities and research on quantum entanglement", if there are a lot of people like you who have "found out" that this is just "mumbo-jumbo"...?
 
  • #68
zonde said:
No, it doesn't mean that. QM predictions are tested and they work in the inefficient-detection case. Experiments with efficient detection cannot change that.
Ok, but calculations assuming 100% efficient detection also indicate a measurable difference between qm predictions and LRHV predictions. So, I'm not sure what you're asserting wrt local realism.

zonde said:
But it would mean that QM predictions are incorrect in the single-photon limit.
I'm not sure what you mean by this. Afaik, wrt the way I learned what I remember of qm :rolleyes:, it doesn't have to do with single-photon detections, but only with photon flux wrt a large number of trials. And isn't this also what LRHV models of entanglement are concerned with predicting?

zonde said:
I am not saying that the "archetypal local realistic model" used by Bell is correct. It was successfully used to make a mathematical argument, but it is very poor as a description of physical reality.
Ok, I think we agree on this. So exactly what are you referring to when you speak of "local realism"?
 
  • #69
ThomasT said:
Ok, but calculations assuming 100% efficient detection also indicate a measurable difference between qm predictions and LRHV predictions. So, I'm not sure what you're asserting wrt local realism.

With LR nowhere near those results, which are 100% consistent with QM.

Don't make me come and beat you up! :smile:
 
  • #70
DrChinese said:
I am opening a new thread to continue discussion of some interesting ideas around EPR and Bell. Specifically, this is about the idea of realism, and whether it is tenable in light of Bell and other HV no-go theorems. Note: I usually use Hidden Variables (HV) and Realism interchangeably although some people see these as quite different. I also tend to use Realism as being an extension of EPR's "elements of reality" as a starting point for most discussions. After all, if a physical measurement can be predicted with certainty without disturbing what is measured... well, I would call that as real as it gets.

charlylebeaugosse had thrown out a few ideas in another thread - especially around some papers by Charles Tresser. So I suggest we discuss around these:

http://arxiv.org/abs/quant-ph/0608008
We prove here a version of Bell Theorem that does not assume locality. As a consequence classical realism, and not locality, is the common source of the violation by nature of all Bell Inequalities.

http://arxiv.org/abs/quant-ph/0503006
In Bohm's version of the EPR gedanken experiment, the spin of the second particle along any vector is minus the spin of the other particle along the same vector. It seems that either the choice of vector along which one projects the spin of the first particle influences at superluminal speed the state of the second particle, or naive realism holds true (i.e., the projections of the spin of any EPR particle along all the vectors are determined before any measurement occurs). Naive realism is negated by Bell's theory that originated and is still most often presented as related to non-locality, a relation whose necessity has recently been proven to be false. I advocate here that the solution of the apparent paradox lies in the fact that the spin of the second particle is determined along any vector, but not along all vectors. Such an any-all distinction was already present in quantum mechanics, for instance in the fact that the spin can be measured along any vector but not at once along all vectors, as a result of the Uncertainty Principle. The time symmetry of the any-all distinction defended here is in fact reminiscent of (and I claim, due to) the time symmetry of the Uncertainty Principle described by Einstein, Tolman, and Podolsky in 1931, in a paper entitled "Knowledge of Past and Future in Quantum Mechanics" that is enough to negate naive realism and to hint at the any-all distinction. A simple classical model is next built, which captures aspects of the any-all distinction: the goal is of course not to have a classical exact model, but to provide a caricature that might help some people.

http://arxiv.org/abs/quant-ph/0501030
We prove here a version of Bell's Theorem that is simpler than any previous one. The contradiction of Bell's inequality with Quantum Mechanics in the new version is not cured by non-locality so that this version allows one to single out classical realism, and not locality, as the common source of all false inequalities of Bell's type.

What implications could this have for modern thought?
Does it mean the universe is random and unpredictable, or is it still the opposite?
 
  • #71
zonde said:
The detection loophole is the fact that the interpretation of Bell tests using photons must rely on the fair sampling assumption. This assumption means that the correlations in photon pairs where one of the two photons goes undetected would (had both been detected) be the same as for the pairs where both photons were detected.
If that is not so, then the correlations can be affected by the detection of different subsamples under different analyzer settings.


I thought the point of these experiments was to prove that objective reality doesn't exist until we start to detect/measure it (although this can't be/isn't true).
 
  • #72
DrChinese said:
With LR nowhere near those results, which are 100% consistent with QM.

Don't make me come and beat you up! :smile:
Well actually, at the risk of personal injury, I feel compelled to say that the predictions of the most sophisticated LRHV formulations aren't that far from the qm predictions. That is, they're more or less in line, just as the predictions of qm are, with what one might expect given the principles of classical optics. That is, the angular dependence is similar.

Nevertheless, the predicted results are still measurably different. So, I don't know what zonde is saying. I currently believe that the LR program is more or less dead, and I don't think that improving the detection efficiency will resurrect it, because, as I mentioned, even assuming 100% efficiency the qm predictions are still different from the LRHV predictions.

IMO, the stuff by Zeilinger et al. about the importance of closing the detection loophole is just a bunch of BS aimed at procuring contracts and grants from more or less ignorant investors. But then, what do I know?

We'll see what zonde has to say about this.
 
  • #73
No-where-man said:
I thought the point of these experiments was to prove that objective reality doesn't exist until we start to detect/measure it (although this can't be/isn't true).
I don't think that that's the point of the experiments -- although some otherwise very wise commenters have said this.

The experiments demonstrate that a certain class of models of quantum entanglement that conform to certain explicit expressions/forms of local realism are incapable of correctly predicting the results of entanglement preparations.

What this means wrt objective reality is still a matter of some dispute. However, the mainstream view seems to be that the experiments don't inform wrt objective reality, but only wrt the specific restrictions on the formulation of models of quantum entanglement.

So, if you're a local realist, then you can still be a local realist and neither qm nor experimental results contradict this view. It's just that you can't model entanglement in terms of the hidden variables that determine individual results -- because coincidental detection isn't determined by the variables that determine individual detection.
 
  • #74
ThomasT said:
Ok, but calculations assuming 100% efficient detection also indicate a measurable difference between qm predictions and LRHV predictions. So, I'm not sure what you're asserting wrt local realism.
Can you say what it means, from the perspective of QM, that there is 100% efficient detection? Does it have anything to do with wave-particle duality?
Then we can speak about calculations assuming 100% efficient detection.

ThomasT said:
I'm not sure what you mean by this. Afaik, wrt the way I learned what I remember of qm :rolleyes:, it doesn't have to do with single-photon detections, but only with photon flux wrt a large number of trials. And isn't this also what LRHV models of entanglement are concerned with predicting?
There is a difference between a statistical ensemble and a physical ensemble.
In a statistical ensemble you have independent events, and all the statistics you calculate from the ensemble can be applied to individual events as probabilities.

Now what I mean is that QM predictions are not accurate for a statistical ensemble of single-photon events. It gives basically the same effect as making the photons distinguishable at the double slit, i.e. no interference pattern.
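
For what it's worth, the textbook contrast being invoked here can be sketched in a few lines (standard two-slit intensity algebra, not anything specific to the claim above): indistinguishable paths add at the amplitude level and produce a cross term, distinguishable paths add at the probability level and don't:

```python
import math

def two_slit_intensity(phase, indistinguishable=True):
    # Unit-amplitude paths through each slit; phase = path-length phase difference.
    if indistinguishable:
        # Amplitudes add, then square: |1 + e^{i*phase}|^2 = 2 + 2*cos(phase)
        return 2 + 2 * math.cos(phase)
    # Distinguishable paths: probabilities add, no interference term.
    return 2

for phase in (0.0, math.pi / 2, math.pi):
    print(two_slit_intensity(phase, True), two_slit_intensity(phase, False))
# Interference sweeps from 4 down to 0; the distinguishable case sits flat at 2.
```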

ThomasT said:
Ok, I think we agree on this. So exactly what are you referring to when you speak of "local realism"?
I am referring to a kind of (physical) ensemble interpretation. Or, more exactly, I mean that interference is an effect of unfair sampling. I have tried to put the things from this discussion together into a single coherent piece: http://vixra.org/abs/1109.0052
 
  • #75
zonde said:
... I have tried to put the things from this discussion together into a single coherent piece: http://vixra.org/abs/1109.0052

Well, you ignore my question on mainstream science, and then you link to a "paper" on a site that has this policy:
viXra.org said:
ViXra.org is an e-print archive set up as an alternative to the popular arXiv.org service owned by Cornell University. It has been founded by scientists who find they are unable to submit their articles to arXiv.org because of Cornell University's policy of endorsements and moderation designed to filter out e-prints that they consider inappropriate.

ViXra is an open repository for new scientific articles. It does not endorse e-prints accepted on its website, neither does it review them against criteria such as correctness or author's credentials.

Nice.
 
  • #76
zonde said:
Can you say what it means, from the perspective of QM, that there is 100% efficient detection? Does it have anything to do with wave-particle duality? Then we can speak about calculations assuming 100% efficient detection.
Regarding the assumption of 100% efficient detection, afaik, when qm and proposed LRHV models of entanglement are compared, this comparison is usually done on the basis of calculations "in the ideal", in order to simplify things and make the comparison more clear.

Anyway, what I was getting at had to do with my current understanding that qm and LRHV (wrt models that are clearly, explicitly local and realistic, i.e., Bell-type models) entanglement predictions are necessarily different. And, if so, then if "local realism holds" for quantum entanglement, then it follows that qm doesn't hold for quantum entanglement.

But you noted in post #65 that "QM predictions are tested and they work for inefficient detection case". So, apparently, you're saying that qm holds for quantum entanglement. And what I surmise from this is that you think that there's an LRHV formulation of quantum entanglement that agrees with qm.

This is what I'm not clear about. Are you advocating an LRHV model that's compatible with qm, or are you saying something else?

Let me take a guess. There's an unacceptable disparity between individual detection efficiency and coincidental detection efficiency. What you're saying is that as these converge, the qm and LRHV correlation ranges will converge, and the correlation curves will become more nearly congruent. Is that what you're saying?

zonde said:
... what I mean is that QM predictions are not accurate for a statistical ensemble of single-photon events.
If by "statistical ensemble of single photon events" you're referring to either an accumulation of individual results or single individual results taken by themselves, then Bell has already shown that qm and local realism are compatible wrt this.

But you said in post #65 that if local realism holds for quantum entanglement "it would mean that QM predictions are incorrect in the single-photon limit." And you seem to be saying in post #66 that you agree with the idea that qm doesn't apply to single particles.

I don't know what that means. Don't we already know that neither qm nor LRHV models can predict the occurrence of single-photon detections in optical Bell tests (except when the angular difference of the polarizers is either 0 or 90 degrees and the result at one end is known)? For all other settings, the probability of individual detection is always 1/2 at both ends, i.e., the results accumulate randomly.

I don't know if I'm parsing correctly what you're saying. Hopefully this post will enable further clarification. And thanks for the link to your paper. Maybe it will clear things up for me.
 
  • #77
ThomasT said:
Regarding the assumption of 100% efficient detection, afaik, when qm and proposed LRHV models of entanglement are compared, this comparison is usually done on the basis of calculations "in the ideal", in order to simplify things and make the comparison more clear.

Anyway, what I was getting at had to do with my current understanding that qm and LRHV (wrt models that are clearly, explicitly local and realistic, i.e., Bell-type models) entanglement predictions are necessarily different. And, if so, then if "local realism holds" for quantum entanglement, then it follows that qm doesn't hold for quantum entanglement.

But you noted in post #65 that "QM predictions are tested and they work for inefficient detection case". So, apparently, you're saying that qm holds for quantum entanglement. And what I surmise from this is that you think that there's an LRHV formulation of quantum entanglement that agrees with qm.

This is what I'm not clear about. Are you advocating an LRHV model that's compatible with qm, or are you saying something else?

Let me take a guess. There's an unacceptable disparity between individual detection efficiency and coincidental detection efficiency. What you're saying is that as these converge, the qm and LRHV correlation ranges will converge, and the correlation curves will become more nearly congruent. Is that what you're saying?
I am not sure what you mean by "unacceptable disparity between individual detection efficiency and coincidental detection efficiency".

So let me say that I am advocating a QM interpretation that is compatible with local realism.

QM is rather flexible about its predictions. For example, take the double-slit experiment. Let's say we perform it and do not observe any interference pattern. We can say that the light was not coherent (which basically means absence of interference) or that the two photon paths were distinguishable (another term that means absence of interference).
So QM still holds even if we do not observe any interference.

In a similar fashion we can claim (if we want) that QM still holds even if quantum entanglement correlations reduce to classical correlations in the case of efficient detection. Here by classical correlations I mean a product of probabilities, not Bell's model.

ThomasT said:
If by "statistical ensemble of single photon events" you're referring to either an accumulation of individual results or single individual results taken by themselves, then Bell has already shown that qm and local realism are compatible wrt this.

But you said in post #65 that if local realism holds for quantum entanglement "it would mean that QM predictions are incorrect in the single-photon limit." And you seem to be saying in post #66 that you agree with the idea that qm doesn't apply to single particles.

I don't know what that means. Don't we already know that neither qm nor LRHV models can predict the occurrence of single-photon detections in optical Bell tests (except when the angular difference of the polarizers is either 0 or 90 degrees and the result at one end is known)? For all other settings, the probability of individual detection is always 1/2 at both ends, i.e., the results accumulate randomly.
I am saying that there is a difference between the statistical "sum" of 1000 experiments with a single photon and a single experiment with 1000 photons. In the first case you can't get interference, but in the second case you can.
 
  • #78
zonde said:
So let me say that I am advocating a QM interpretation that is compatible with local realism.

Wow! This is getting better and better!

Pleeease, what is the name of this "QM interpretation" that refutes the predictions of QM!? :bugeye:

zonde said:
QM is rather flexible about its predictions.

Really?? :eek: That’s not what I’ve heard... I have always gotten the impression that QM is one of the most accurate physical theories constructed thus far?? Quod erat demonstrandum... within ten parts in a billion (10⁻⁸).

zonde said:
I am saying that there is a difference between the statistical "sum" of 1000 experiments with a single photon and a single experiment with 1000 photons. In the first case you can't get interference, but in the second case you can.

This must be the gem of everything you’ve claimed this far! I’m stunned!

I think you’re in for a Noble (sur)Prize if you can show that throwing ONE DIE 1000 times gives a different outcome compared to throwing 1000 DICE ONE TIME!
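
For anyone who wants to check the dice analogy numerically, here's a throwaway sketch (in code the two schemes are literally the same sampling process, which is rather the point):

```python
import random
from collections import Counter

random.seed(1)

# Scheme 1: ONE die thrown 1000 times.
one_die = Counter(random.randint(1, 6) for _ in range(1000))

# Scheme 2: 1000 dice each thrown ONCE.
many_dice = Counter(random.randint(1, 6) for _ in range(1000))

print(sorted(one_die.items()))
print(sorted(many_dice.items()))
# Both histograms hover around ~167 per face, differing only by statistical noise.
```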
 
  • #79
zonde said:
I am not sure what you mean by "unacceptable disparity between individual detection efficiency and coincidental detection efficiency".
In post #59 you quoted from Ultra-bright source of polarization-entangled photons this,
After passing through adjustable irises, the light was collected using 35mm-focal length doublet lenses, and directed onto single-photon detectors — silicon avalanche photodiodes (EG&G #SPCM’s), with efficiencies of ~ 65% and dark count rates of order 100 s⁻¹
and this,
The collection irises for this data were both only 1.76 mm in diameter – the resulting collection efficiency (the probability of collecting one photon conditioned on collecting the other) is then ~ 10%.
and then said,
zonde said:
So while the detector efficiency was around 65%, the coincidence rate was only around 10%. And it is this coincidence rate that matters if we want to speak about closing the detection loophole.
I supposed you were saying that the gap between detector efficiency and coincidence rate is due to the efficiency of the coincidence mechanism of the experimental setup, and that as the efficiency of coincidence counting increases, and therefore as detector efficiency and coincidence rate converge, the predicted (and recorded) qm and LRHV correlation ranges will converge -- thus making the qm and LRHV correlation curves more nearly congruent.

Which would be in line with your statement that,
zonde said:
... let me say that I am advocating a QM interpretation that is compatible with local realism.

zonde said:
... we can claim (if we want) that QM still holds even if quantum entanglement correlations reduce to classical correlations in the case of efficient detection. Here by classical correlations I mean a product of probabilities, not Bell's model.
Ok, so it seems that what you're saying is that, given maximally efficient detection (both wrt individual and coincidental counts), quantum entanglement correlations will be the same as classical correlations. Is that what you're saying?

If so, then can't one just assume maximally efficient detection and calculate whether qm and classical models of entanglement give the same correlation coefficient?

zonde said:
I am saying that there is difference between statistical "sum" of 1000 experiments with single photon and single experiment with 1000 photons.
I don't understand exactly what you mean by this, and how it pertains to what seems to be your contention that quantum entanglement should "reduce to" classical correlations given maximal coincidental detection efficiency. So if you could elaborate, then that would help.

Is this paper, Quantum entanglement and interference from classical statistics, relevant to what you're saying? (The author, C. Wetterich, also has other papers at arxiv.org that might pertain.)

And now I'm actually going to read your paper. I just wanted to hash out what you're saying in a minimally technical way first, because I would think that you would want to be able to eventually clearly explain your position to minimally educated, but interested/fascinated laypersons such as myself. :smile:
 
  • #80
ThomasT said:
I supposed you were saying that the gap between detector efficiency and coincidence rate is due to the efficiency of the coincidence mechanism of the experimental setup, and that as the efficiency of coincidence counting increases, and therefore as detector efficiency and coincidence rate converge, the predicted (and recorded) qm and LRHV correlation ranges will converge -- thus making the qm and LRHV correlation curves more nearly congruent.
I am certainly not saying that the gap between detector efficiency and coincidence rate is due to the efficiency of the coincidence mechanism.
Actually, I was not saying anything about the reasons for that difference. What I said is that we need a high coincidence rate relative to single-photon detections to avoid the need for the fair sampling assumption (to close the detection loophole).

The reason for the gap, roughly, is that a lot of the photons hitting the two detectors are not paired up. So we have photon losses that are not symmetrical, and if they happen after the polarization analyzer they are subject to the fair sampling assumption. And judging by the scheme of the experiment, this is exactly the case for the Kwiat experiment: there, apertures and interference filters are placed after the polarization analyzer.
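
For concreteness, the standard way to quantify this kind of asymmetric loss from raw count rates is the Klyshko heralding efficiency. A minimal sketch with made-up illustrative rates (not numbers from the Kwiat paper):

```python
# Klyshko heralding efficiencies estimated from singles and coincidence rates.
# Illustrative counts per second only; not taken from the experiment discussed.
singles_A = 100_000    # photons detected in arm A
singles_B = 100_000    # photons detected in arm B
coincidences = 10_000  # detected pairs

eta_A = coincidences / singles_B  # prob. of detecting A's photon given B fired
eta_B = coincidences / singles_A  # prob. of detecting B's photon given A fired
print(eta_A, eta_B)  # 0.1, 0.1 -- the ~10% conditional efficiency discussed above
```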


ThomasT said:
Ok, so it seems that what you're saying is that, given maximally efficient detection (both wrt individual and coincidental counts), quantum entanglement correlations will be the same as classical correlations. Is that what you're saying?

If so, then can't one just assume maximally efficient detection and calculate whether qm and classical models of entanglement give the same correlation coefficient?
I say that in the case of efficient detection the correlations of polarization-entangled photons approach this rule:
P=\frac{1}{2}\left(\cos^2(a)\cos^2(b)+\sin^2(a)\sin^2(b)\right)
and it is classical.
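
To make the contrast concrete, here is a small sketch of mine comparing that rule with the QM coincidence probability for the (|HH> + |VV>)/sqrt(2) state, P(a,b) = ½cos²(a-b):

```python
import math

def p_qm(a, b):
    # QM coincidence probability for the (|HH> + |VV>)/sqrt(2) state.
    return 0.5 * math.cos(a - b) ** 2

def p_classical(a, b):
    # zonde's rule: a 50/50 mixture of |HH> and |VV>, detection probabilities multiplied.
    return 0.5 * (math.cos(a)**2 * math.cos(b)**2 + math.sin(a)**2 * math.sin(b)**2)

for a_deg, b_deg in [(0, 0), (0, 45), (22.5, 67.5), (45, 45)]:
    a, b = math.radians(a_deg), math.radians(b_deg)
    print(a_deg, b_deg, round(p_qm(a, b), 4), round(p_classical(a, b), 4))
# The two agree in the H/V basis and differ by (1/4)sin(2a)sin(2b), the
# interference cross term -- largest in the +45/-45 (diagonal) basis.
```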

ThomasT said:
I don't understand exactly what you mean by this, and how it pertains to what seems to be your contention that quantum entanglement should "reduce to" classical correlations given maximal coincidental detection efficiency. So if you could elaborate, then that would help.
By this I mean that you can observe photon interference only if you observe many photons. You detect some photons, and the effect of interference is that you detect more photons on average when there is constructive interference (many photons of the ensemble interact with the experimental equipment in a way that makes more photons detectable) and fewer photons when there is destructive interference.
The connection with quantum entanglement is that the measurement basis for polarization-entangled photons determines whether you are measuring photon polarization (H/V basis) or interference between photons of different polarizations (+45/-45 basis), i.e. the mechanism behind the correlations changes as you rotate the polarization analyzer.
If the interference disappears (efficient detection), then the correlations are maximal in the H/V basis and zero in the +45/-45 basis.

ThomasT said:
Is this paper, Quantum entanglement and interference from classical statistics, relevant to what you're saying? (The author, C. Wetterich, also has other papers at arxiv.org that might pertain.)
I do not see that it is relevant. My judgment is simple: you can speak of a viable classical model for quantum entanglement only if it makes use of unfair sampling. I didn't see anything like that in this paper, so I'd say it must be faulty.
 