Nick Herbert's Proof: Quantum Non-Locality Explained

  • Thread starter: harrylin
  • Tags: Proof
Summary
Nick Herbert's proof of quantum non-locality suggests that assuming local reality leads to a prediction of less than 50% code mismatch at 60 degrees, which appears convincing. The discussion raises questions about potential issues with this proof and the existence of models that could reproduce quantum mechanics' characteristics with greater mismatch. It highlights that many published models exploit the detection loophole, which may not align with Herbert's conclusions. The conversation also emphasizes the distinction between stochastic independence and genuine physical independence in the context of locality, questioning the assumptions underlying Bell-type proofs. Overall, the debate continues on the implications of non-locality and the realism requirement in quantum mechanics.
  • #91
Delta Kilo said:
[..] One hole is the direct reference in the 'proof' to the number of mismatches for different angles for the same coded sequence, which leaves it open to the argument that this is not what is measured in the actual experiment.
Except that, as we explained several times, Herbert of course does not imply the same coded sequence. But if you didn't mean that literally, perhaps you referred to the same issue as the imperfect detection mentioned by gil and zonde - and that's not just a hole but a pertinent error in Herbert's presentation.
Another hole is the failure to mention the statistical character of the 'proof'. He does not say how long the sequence should be for the 'proof' to be valid. Clearly it does not work with a sequence of length 1, and it has a fair chance of failing with sequences of a smallish number of bits. As I'm sure will be discussed, again and again.
I would not call that a hole, as it is implicit in the argumentation; how could anyone who is educated not understand that, for example, "50/50 random result" refers to a statistical process? Thus I doubt that it will be discussed in this thread - at least, I would not spend more than a single reply if such a groundless objection were raised.
In the absence of a rigorous formulation, every word of it is subject to personal interpretation (and there are a lot of words); there is just no way to make convincing arguments about it. A QM proof without any math in it. Great, just great.
Delta Kilo Over and Out.
No problem, Over and Out!

Just for completeness: the math is certainly there, implicit in the words; it's inherent to physics that it is concerned with more than just math (for example Newton's Principia and Faraday's theory are mostly words and illustrations, with little pure math).
 
Last edited:
  • #92
zonde said:
[...]I think you are missing the positive side of Herbert's fantasy data. [..]
That may be true; I think that his fantasy data are very good, as long as it is stressed that they are not a good reflection of real data processing.
If you are interested in comparing that analysis with mine, I have an Excel file left over from my analysis: see the attachment
I would like to, but of course not in a thread on Herbert's proof! Thus, I will now start a thread on that topic.
 
  • #93
lugita15 said:
But the thing is that pretty much every one of the loopholes has been closed separately; we just haven't managed to close all of them simultaneously in one experiment. So the local determinist is left with having to come up with an explanation of the form "In THIS kind of experiment, the predictions of QM only appear to be right because of THIS loophole, but in THAT kind of experiment, the predictions of QM only appear to be right because of THAT loophole."
Predictions of QM are just math without a physical model. But the same math can apply to very different physical situations, so the argument that the physical situation should be the same because the math is the same does not hold water, IMHO.

And I think there are arguments why a photon should be viewed as radically different from an ion. Matter particles are the type that do "communicate" among themselves, as they can form persistent structures. Photons, on the other hand, are agents of "communication" rather than nodes in a structure.

And then there is a more philosophical, handwaving type of justification for a local realistic explanation. But that would be interesting only if you want to understand the local realistic position rather than test its strength.

lugita15 said:
There's no fundamental reason why it hasn't been closed; it's just that we have more experience doing Bell tests with photons, so people have developed good techniques for switching photon detector settings fast enough that the communication loophole is closed. If I were to guess, I would say that it's more likely that the detection loophole will be closed for photon experiments sooner than the communication loophole is closed for ion experiments, if only because more people are going to work on improving photon detectors.
If you decide to start a new topic about loophole-free experiments, I can propose some interesting papers for discussion:
On the ion side:
Bell inequality violation with two remote atomic qubits
An Elementary Quantum Network of Single Atoms in Optical Cavities
On the photon side:
Conclusive quantum steering with superconducting transition edge sensors
 
  • #94
harrylin said:
I still don't get it... Herbert's proof doesn't even consider particles, let alone both particles or the same photon pairs.

Here is how I apply Herbert's proof to the scenario of incomplete detection, following his logic to the letter and adding my comments: ...
Step Two: Tilt the A detector till errors reach 25%. This occurs at a mutual misalignment of 30 degrees.

Let us combine this with the other assumptions about how Herbert's SPOT detector works. According to Herbert's description, detector 1 fires 0% of the time when tilted at 90°, 50% of the time when tilted at 45°, and 100% of the time when tilted at 0°. Had he stopped there, it would appear to be linear. However, Herbert goes on to say that detector 1 fires 25% of the time when tilted at 30°. Clearly the functioning of the SPOT detector cannot be linear with respect to angle. His own description of the functioning of the detector cannot be explained by a linear function.

More later.
 
  • #95
billschnieder said:
Let us combine this with the other assumptions about how Herbert's SPOT detector works. According to Herbert's description, detector 1 fires 0% of the time when tilted at 90°, 50% of the time when tilted at 45°, and 100% of the time when tilted at 0°. Had he stopped there, it would appear to be linear. However, Herbert goes on to say that detector 1 fires 25% of the time when tilted at 30°. Clearly the functioning of the SPOT detector cannot be linear with respect to angle. His own description of the functioning of the detector cannot be explained by a linear function.

More later.
At first sight, that issue doesn't matter for Herbert's proof. I copy my overview back here, with a little modification based on the later discussion. It seems to me that the bold part is valid no matter whether the relationship is linear or not:

----------------------------------------------------------------------------
Step One: Start by aligning both SPOT detectors. No errors are observed.

[Note that, as we next discussed, this is perhaps the main flaw of Herbert's proof, as it implies 100% detection and zero mismatches. But it is interesting to verify "what if":]

[harrylin: for example the sequences go like this:

A 10010110100111010010
B 10010110100111010010]

Step Two: Tilt the A detector till errors reach 25%. This occurs at a mutual misalignment of 30 degrees.

[harrylin: for example (a bit idealized) the sequences go like this:

A 10010100110110110110
B 10110100111010010010

This mismatch could be partly due to the detection of different photon pairs.]

Step Three: Return A detector to its original position (100% match). Now tilt the B detector in the opposite direction till errors reach 25%. This occurs at a mutual misalignment of -30 degrees.

[harrylin: for example the sequences go like this, for the same reasons:

A 10100100101011010011
B 10010101101011010101]

Step Four: Return B detector to its original position (100% match). Now tilt detector A by +30 degrees and detector B by -30 degrees so that the combined angle between them is 60 degrees.

What is now the expected mismatch between the two binary code sequences?

[..] Assuming a local reality means that, for each A photon, whatever hidden mechanism determines the output of Miss A's SPOT detector, the operation of that mechanism cannot depend on the setting of Mr B's distant detector. In other words, in a local world, any changes that occur in Miss A's coded message when she rotates her SPOT detector are caused by her actions alone.
[STRIKE][harrylin: apparently that includes whatever mechanism one could imagine - also non-detection of part of the photons][/STRIKE]
And the same goes for Mr B. [..] So with this restriction in place (the assumption that reality is local), let's calculate the expected mismatch at 60 degrees.

Starting with two completely identical binary messages, if A's 30 degree turn introduces a 25% mismatch and B's 30 degree turn introduces a 25% mismatch, then the total mismatch (when both are turned) can be at most 50%.
In fact the mismatch should be less than 50% because if the two errors happen to occur on the same photon, a mismatch is converted to a match.

[harrylin: and if the errors happen to occur on different photons that are compared, a mismatch will still sometimes be converted to a match. Thus now, for example, the sequences go like this, for the same reasons as for +30 degrees and -30 degrees:

A 10101010110101010011
B 10100100101011010101]
----------------------------------------------------------------------------

It looks to me that the only thing one has to assume is that there is no conspiracy of photons based on how the detectors are relatively oriented - and even that is prevented by design in some tests. If you disagree, please detail how two 25% mismatches can, under the suggested ideal conditions, result in more than 50% total mismatch.
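As a minimal numerical sketch of that counting argument (my own illustration, assuming the idealized conditions of the overview above: identical codes at (0, 0), and each 30° rotation flipping a randomly chosen 25% of the bits on its own side only):

[CODE=python]
import random

random.seed(0)
N = 100_000

# Identical codes at (0, 0); each detector's rotation flips a randomly chosen
# ~25% of the bits on its own side only (the locality assumption).
base   = [random.randint(0, 1) for _ in range(N)]
flip_A = [random.random() < 0.25 for _ in range(N)]   # Miss A's local errors at +30 deg
flip_B = [random.random() < 0.25 for _ in range(N)]   # Mr B's local errors at -30 deg

code_A = [b ^ f for b, f in zip(base, flip_A)]        # A's code at +30 deg
code_B = [b ^ f for b, f in zip(base, flip_B)]        # B's code at -30 deg

mismatch = sum(x != y for x, y in zip(code_A, code_B)) / N
print(f"mismatch at 60 degrees: {mismatch:.3f}")      # about 0.375, never above 0.5
[/CODE]

With independent local flips the expected mismatch is 2 x 0.25 x 0.75 ≈ 37.5%; whatever the overlap of the two flip sets, it can never exceed 25% + 25% = 50%.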

Also, you commented elsewhere:
billschnieder said:
[..] - The second issue which I have discussed [..] is that the inequality is derived for possibilities which can never be simultaneously realized (actualized). In principle it is impossible to test experimentally, so trying to take experimental results on the basis that the probabilities are the same doesn't make sense. The probabilities may be the same, but not simultaneously.
I think that that relates to the same reasonable looking assumption of non-conspiracy - we assume that the moon shines even when we don't look, because it shines whenever we look. Do you claim that the statistics on one side can be affected by what is done on the other side? That appears very "non-local" to me!
 
Last edited:
  • #96
billschnieder said:
Let us combine this with the other assumptions about how Herbert's SPOT detector works. According to Herbert's description, detector 1 fires 0% of the time when tilted at 90°, 50% of the time when tilted at 45°, and 100% of the time when tilted at 0°. Had he stopped there, it would appear to be linear. However, Herbert goes on to say that detector 1 fires 25% of the time when tilted at 30°. Clearly the functioning of the SPOT detector cannot be linear with respect to angle. His own description of the functioning of the detector cannot be explained by a linear function.
First of all, it should be called sublinearity rather than linearity, because the form of the Bell inequality is "something plus something is AT MOST something", not "something plus something equals something". Second, the sublinearity is not an assumption; it is the conclusion of a careful argument. So you can't say "the sublinearity is contrary to experimental results, therefore the argument is invalid." The argument is, after all, a proof by contradiction: it assumes that local causality underlies quantum mechanical phenomena, uses this assumption to conclude that the mismatches must be sublinear, and then notes that this sublinearity runs contrary to the experimental predictions of QM.
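For concreteness (my own addition, using the standard quantum mechanical prediction that the mismatch rate at relative polarizer angle θ is sin²θ, which reproduces the percentages Herbert quotes):

$$R_{QM}(\theta)=\sin^2\theta:\qquad R(30^\circ)=25\%,\quad R(45^\circ)=50\%,\quad R(90^\circ)=100\%\quad\text{(clearly not linear)}$$
$$\text{local realism: } R(60^\circ)\le R(30^\circ)+R(30^\circ)=50\%,\qquad \text{QM: } R(60^\circ)=\sin^2 60^\circ=75\%$$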
 
  • #97
I'm now looking into simulations that try to imitate the outcomes of experiments such as the one described here, and I found that some seemingly unimportant differences in experimental settings can be of great importance.
Does anyone know of an experiment that exactly reproduced the set-up of Herbert's proof?
That protocol uses 0, +30° and -30° in a special way that is essential for the proof.

As a reminder:
Step One: Start by aligning both SPOT detectors. No errors are observed.
Step Two: Tilt the A detector till errors reach 25%. This occurs at a mutual misalignment of 30 degrees.
Step Three: Return A detector to its original position (100% match). Now tilt the B detector in the opposite direction till errors reach 25%. This occurs at a mutual misalignment of -30 degrees.
Step Four: Return B detector to its original position (100% match). Now tilt detector A by +30 degrees and detector B by -30 degrees so that the combined angle between them is 60 degrees.
 
  • #98
harrylin said:
That protocol uses 0, +30° and -30° in a special way that is essential for the proof.
No, those particular angles aren't essential for the proof at all. We can take any angles a, b, and c. Let R(θ1,θ2) be the error rate when the polarizers are oriented at θ1 and θ2. Then Herbert's proof shows that R(a,c) ≤ R(a,b) + R(b,c).

Choosing equally spaced angles just makes things simple.
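A quick way to see that the choice of angles is inessential (a sketch of my own, using the quantum mechanical error rate sin²(θ1−θ2); each of these triples violates the bound):

[CODE=python]
import math

def qm_error_rate(theta1, theta2):
    # Quantum mechanical mismatch rate for polarizers at theta1, theta2 (degrees).
    return math.sin(math.radians(theta1 - theta2)) ** 2

for a, b, c in [(0, 30, 60), (0, 22, 52), (0, 22.5, 45)]:
    lhs = qm_error_rate(a, c)                        # R(a, c)
    rhs = qm_error_rate(a, b) + qm_error_rate(b, c)  # R(a, b) + R(b, c)
    print(f"a={a}, b={b}, c={c}: R(a,c)={lhs:.3f}, bound={rhs:.3f}, "
          f"{'violated' if lhs > rhs else 'satisfied'}")
[/CODE]

The second triple corresponds to detector offsets of 22° and 30° in opposite directions.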
 
  • #99
harrylin said:
I'm now looking into simulations that try to imitate the outcomes of experiments such as the one described here
Are these simulations that resort to loopholes?
 
  • #100
lugita15 said:
No, those particular angles aren't essential for the proof at all. We can take any angles a, b, and c. Let R(θ1,θ2) be the error rate when the polarizers are oriented at θ1 and θ2. Then Herbert's proof shows that R(a,c) ≤ R(a,b) + R(b,c).

Choosing equally spaced angles just makes things simple.
I discovered that, more than elsewhere, the devil is in the details. Indeed, it doesn't have to be +30 degrees and -30 degrees; I think that +22 and -30 degrees is just as tough for "local reality", and his argument is not affected by that. However, many experiments used protocols that don't match Herbert's proof. Does anyone know of one that does?
lugita15 said:
Are these simulations that resort to loopholes?
Any simulation that manages to reproduce real observations will do so by some means - and I would not know which means would not be called "loopholes" by some. I'm interested in verifying such simulations against experiments that have actually been performed; but regretfully, many experiments have loopholes built in by design. Herbert's experimental design contains perhaps the fewest pre-baked loopholes, and that makes it challenging. Thus, once more: has his experiment actually been done, as he suggests?

PS: please don't present excuses why such an experiment has (maybe) not been performed; only answer if you can fulfill my request and give such matching data.
 
Last edited:
  • #101
Herbert's proof is a proof of Bell's theorem by consideration of a two-party, two-setting, two-outcome experiment. In other words, a CHSH-style experiment. Every CHSH-style experiment which has been done to date, and which had a successful outcome (a violation of the CHSH inequality), suffers from one of the "standard" loopholes, i.e. failure to comply with a rigorous experimental protocol requiring timing, spatial separation, rapid generation of random settings, and legal measurement outcomes. Every local-realistic simulation of the data of such an experiment has to exploit one of those loopholes. (Note that in the presence of perfect (anti)correlation in one setting pair, violation of Bell's original inequality and violation of the CHSH inequality are equivalent.)
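One way to spell that equivalence out (my own note, writing each correlation as E = 1 − 2R, with R the mismatch rate for ±1-valued outcomes): one of the CHSH inequalities reads

$$E(a,b)-E(a,b')+E(a',b)+E(a',b')\le 2 \;\Longleftrightarrow\; R(a,b')\le R(a,b)+R(a',b)+R(a',b'),$$

so in the perfect-correlation case, R(a',b') = 0, it reduces to Herbert's three-term bound

$$R(a,b')\le R(a,b)+R(a',b).$$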
 
  • #102
PS experts expect the definitive experiment within 5 years. Top experimental groups in Brisbane, Vienna and Singapore are very clearly systematically working towards this goal (whether or not they say so publicly), and no doubt others are in the race as well.
 
  • #103
harrylin said:
Indeed, it doesn't have to be +30 degrees and -30 degrees; I think that +22 and -30 degrees is just as tough for "local reality", and his argument is not affected by that.
Good, at least we're agreed on that.
However, many experiments used protocols that don't match Herbert's proof. Does anyone know of one that does?
Just to be clear, by protocol do you mean his procedure of first aligning both polarizers, then tilting one until you get a certain error rate, then tilting it back and tilting the other one in the opposite direction until you get a certain error rate, and then tilting both in opposite directions? That particular procedure is as irrelevant as the choice of angles a, b, and c. What matters is that you test the error rates for a and c, a and b, and b and c.
Any simulation that manages to reproduce real observations will do so by employing means to do so - and I would not know which means would not be called "loopholes" by some.
Fair enough, I think it's pretty clear what is and isn't a loophole. Let me ask you this: do the simulations you're examining exploit either the communication loophole or the fact that detection equipment is imperfect?
I'm interested to verify such simulations with experiments that have actually been performed; but regretfully, many experiments have built-in loopholes by design. Herbert's design of experiment contains perhaps the least pre-baked loopholes, and that makes it challenging. Thus, once more: has his experiment actually been done, as he suggests?
When you say "as he suggests", do you specifically want an experiment capable of testing his inequality? Well, his inequality can only be tested if you have 100% detector efficiency. (Otherwise you need the CHSH inequality.) The only experiment to date that achieved that was the ion experiment by Rowe, but that experiment didn't close the communication loophole.

Or do you want an experiment that tested the CHSH inequality instead, but used a more "Herbert-like" setup in whatever sense you mean it?

PS: please don't present excuses why such an experiment has (maybe) not been performed; only answer if you can fulfill my request and give such matching data
It's unclear what you want. If you're looking for a loophole-free Bell test, then we're still working on that.
 
  • #104
In Herbert's setup we know in advance that we will first do a heap of (0,0) measurements, then a heap of (0,30), and so on. If the number of each kind is fixed in advance, then it's rather easy to come up with an LHV computer simulation which does the job exactly. Freedom loophole. If the numbers are not known, then you can easily do it if you also use the memory loophole.
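A minimal sketch of the first point (my own illustration, not taken from any paper): if every trial's setting pair is known in advance, the 'hidden variables' may depend on the settings, and a purely local program can reproduce the quantum statistics exactly:

[CODE=python]
import math, random

random.seed(1)

def trial_with_known_settings(theta_A, theta_B):
    # One trial when both settings are known in advance (freedom loophole):
    # draw a pair of bits with the quantum mismatch probability sin^2(theta_A - theta_B).
    a = random.randint(0, 1)
    b = a ^ (random.random() < math.sin(math.radians(theta_A - theta_B)) ** 2)
    return a, b

# Setting schedule fixed in advance, as in Herbert's step-by-step protocol.
schedule = [(0, 0)] * 2000 + [(30, 0)] * 2000 + [(0, -30)] * 2000 + [(30, -30)] * 2000
results = [trial_with_known_settings(tA, tB) for tA, tB in schedule]

for pair in [(0, 0), (30, 0), (0, -30), (30, -30)]:
    block = [r for s, r in zip(schedule, results) if s == pair]
    rate = sum(x != y for x, y in block) / len(block)
    print(pair, f"mismatch ~ {rate:.2f}")   # ~0.00, ~0.25, ~0.25, ~0.75
[/CODE]

With settings chosen randomly per trial and kept hidden from the source, this shortcut is no longer available.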

I suppose someone who did Herbert's *experiment* wouldn't demand exactly zero error rate in the (0,0) configuration. They'd allow a small error rate. So in effect, test CHSH. CHSH looks at four correlations. Fix one at +1, and you reduce it to Bell's inequality, which is essentially Herbert.

See arXiv:1207.5103 by RD Gill (me); I uploaded a revised version last night. It will be available from Tue, 20 Aug 2013 00:00:00 GMT.
 
  • #105
lugita15 said:
[..] Just to be clear, by protocol do you mean his procedure of first aligning both polarizers, then tilting one until you get a certain error rate, then tilting it back and tilting the other one in the opposite direction until you get a certain error rate, and then tilting both in opposite directions? That particular procedure is as irrelevant as the choice of angles a, b, and c. What matters is that you tests the error rates for a and c, a and b, and b and c.
Yes, what matters for me is the kind of angles that are actually tested, as required for his proof. If there were a paper on an experiment that actually followed Nick Herbert's proof as a protocol, then it would be easier to explain (and there would be no need to explain). But apparently that hasn't been done...
Fair enough, I think it's pretty clear what is and isn't a loophole. Let me ask you this: do the simulations you're examining exploit either the communication loophole or the fact that detection equipment is imperfect?
No communication loophole is used, and the output signals at 0 degrees offset are 100% if that is what you mean. But this thread is not about simulation programs; my question is about Herbert's proof.
When you say "as he suggests", do you specifically want an experiment capable of testing his inequality? [..] Or do you want an experiment that tested the CHSH inequality instead, but used a more "Herbert-like" setup in whatever sense you mean it?
I ask for the data of an experiment that did what I put in bold face: by "set-up" I mean a protocol that matches his proof. Likely one or two experiments were done that contain it as a subset. The program that I tested passed a CHSH test with flying colours (there could be an error somewhere, of course!) but failed the protocol of Nick Herbert. As Herbert's test is much clearer and simpler, that's what I now focus on.
 
  • #106
gill1109 said:
Herbert's proof is a proof of Bell's theorem by consideration of a two-party, two-setting, two-outcome experiment. In other words, a CHSH-style experiment.
At first sight yes, but I found that details matter as much as they matter with magic tricks (that's one of my hobbies).

gill1109 said:
[...] I suppose someone who did Herbert's *experiment* wouldn't demand exactly zero error rate in the (0,0) configuration. They'd allow a small error rate. So in effect, test CHSH. CHSH looks at four correlations. Fix one at +1, and you reduce it to Bell's inequality, which is essentially Herbert.

See arXiv:1207.5103 by RD Gill (me); I uploaded a revised version last night. It will be available from Tue, 20 Aug 2013 00:00:00 GMT.
I'll have a look at that, thanks!
 
  • #107
You're asking for a CHSH style experiment where first one of the four pairs of angles is used for many runs, then a second pair, then a third, then a fourth. First (1,1), then (1,2), then (2,1), finally (2,2). And you want perfect correlation in the first batch of runs.

In a real experiment counting coincidences of detector clicks you'll never see *perfect* correlation if the number of runs is large. You might see near to perfect correlation. What will you do then? Publish a failed experiment?
 
  • #108
gill1109 said:
You're asking for a CHSH style experiment where first one of the four pairs of angles is used for many runs, then a second pair, then a third, then a fourth. First (1,1), then (1,2), then (2,1), finally (2,2). And you want perfect correlation in the first batch of runs.

In a real experiment counting coincidences of detector clicks you'll never see *perfect* correlation if the number of runs is large. You might see near to perfect correlation. What will you do then? Publish a failed experiment?
A set-up isn't an outcome, of course, and near-perfect correlation sounds good to me. However, publication bias such as you suggest appears to be a serious problem nowadays... it's a serious risk with Bell tests as well. Imagine that Michelson had not published his "failed" experiment!
 
Last edited:
  • #109
Yes, magic tricks! Every disproof of Bell's theorem, whether theoretical or by computer simulation, is based on a conjuring trick: a combination of sleight of hand and the gift of the gab. That's why the QRC (quantum Randi challenge) was invented.
 
  • #110
gill1109 said:
Yes, magic tricks! Every disproof of Bell's theorem, whether theoretical or by computer simulation, is based on a conjuring trick: a combination of sleight of hand and the gift of the gab. That's why the QRC (quantum Randi challenge) was invented.
Nick Herbert's experiment remains impressive to me, especially at high efficiency; it's perhaps stronger than CHSH. Some imagined loopholes are just nonsense that could distract the audience and even the experimenters themselves. Ever heard of the fakir who throws a rope up into the sky and disappears into the clouds? Apparently such things have been done, but as always, the real protocol was not exactly like that! I'm a bigger skeptic than Randi. :-p
 
  • #111
Herbert has a proof, not an experiment.

The experiment corresponding to Herbert's proof would be a CHSH experiment with a special choice of settings, applied in a special sequence (known in advance), and a more stringent criterion than "violate the CHSH inequality". Herbert requires "violate the CHSH inequality and get perfect correlation with the first of the four setting pairs".

So it is stronger in just one sense, but weaker in others.
 
  • #112
gill1109 said:
Herbert has a proof, not an experiment.

The experiment corresponding to Herbert's proof would be a CHSH experiment with a special choice of settings, applied in a special sequence (known in advance), and a more stringent criterion than "violate the CHSH inequality". Herbert requires "violate the CHSH inequality and get perfect correlation with the first of the four setting pairs".

So it is stronger in just one sense, but weaker in others.
He makes a claim about physical reality based on experiments which supposedly proved that claim. The sequence plays no role in his proof; however the direct comparison of certain settings does (without mixing in other settings, which could obscure the interpretation). I'll check out your paper tomorrow to see if I can extract relevant data from it or its references.
 
  • #113
harrylin said:
Yes, what matters for me is the kind of angles that are actually tested, as required for his proof.
What do you mean "the kind of angles"? Didn't you just agree with me that the logic of the proof is unaffected by what three angles you choose?
harrylin said:
No, I ask for the data of an experiment that did what I put in bold face: by "set-up" I mean a protocol that matches his proof. Likely one or two experiments were done that contain it as a subset.
Sorry, when did you put something in boldface?

Can you tell me what would or would not count as a "protocol that matches his proof"? I don't even know what you mean by protocol. Do you mean that the experiment should measure the error rate for a and c, a and b, and b and c, or do you want something more demanding?
 
  • #114
lugita15 said:
What do you mean "the kind of angles"? Didn't you just agree with me that the logic of the proof is unaffected by what three angles you choose?
It's the details that matter, see below. Probably that has been done, but yesterday I didn't find such a data set (to my great surprise). Maybe tomorrow.
Sorry, when did you put something in boldface?
Post #97: I made "set-up" bold-face, to stress that I talk about how the test is done.
Can you tell me what would or would not count as a "protocol that matches his proof"? I don't even know what you mean by protocol. Do you mean that the experiment should measure the error rate for a and c, a and b, and b and c, or do you want something more demanding?
Hardly more demanding than that. Getting back to my reminder of yesterday:

'Step One: Start by aligning both SPOT detectors. No errors are observed.
Step Two: Tilt the A detector till errors reach 25%. This occurs at a mutual misalignment of 30 degrees.
Step Three: Return A detector to its original position (100% match). Now tilt the B detector in the opposite direction till errors reach 25%. This occurs at a mutual misalignment of -30 degrees.
Step Four: Return B detector to its original position (100% match). Now tilt detector A by +30 degrees and detector B by -30 degrees so that the combined angle between them is 60 degrees.'

From that I get that for his argument we need, at detectors (A, B), data streams for the angle pairs (a, a'), (b, a'), (a, c), and (b, c) as a minimum, and it would be nice to repeat (a, a') as Herbert suggests. As an experimenter I would also throw in (b, b') and (c', c) once each for better characterization, but that's not necessary. Moreover, typically b and c are angles of less than 45° in opposite directions, but I suppose that bigger angles are also fine.
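To make concrete what such a data set would need to support (a sketch of my own; the setting labels follow the list above, with a = a' = 0°, b = +30°, c = -30°, and the recorded streams are hypothetical):

[CODE=python]
def mismatch_rate(stream_A, stream_B):
    # Fraction of paired detection events whose binary outcomes differ.
    return sum(x != y for x, y in zip(stream_A, stream_B)) / len(stream_A)

def herbert_check(streams):
    # streams maps a setting pair, e.g. (0, 0) or (30, -30),
    # to the two recorded bit sequences (A, B).
    r_aa = mismatch_rate(*streams[(0, 0)])     # should be (near) zero
    r_ba = mismatch_rate(*streams[(30, 0)])    # ~25% in Herbert's description
    r_ac = mismatch_rate(*streams[(0, -30)])   # ~25%
    r_bc = mismatch_rate(*streams[(30, -30)])  # QM predicts ~75%
    violated = r_bc > r_ba + r_ac              # local-realist bound, given r_aa ~ 0
    return r_aa, r_ba, r_ac, r_bc, violated
[/CODE]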
 
Last edited:
  • #115
gill1109 said:
[..] I suppose someone who did Herbert's *experiment* wouldn't demand exactly zero error rate in the (0,0) configuration. They'd allow a small error rate. So in effect, test CHSH. CHSH looks at four correlations. Fix one at +1, and you reduce it to Bell's inequality, which is essentially Herbert.

See arXiv:1207.5103 by RD Gill (me); I uploaded a revised version last night. It will be available from Tue, 20 Aug 2013 00:00:00 GMT.
Hi Gill, I have now looked at your revised version. Do any of your references contain the data set(s) that I'm after?
 
