Nick Herbert's Proof: Quantum Non-Locality Explained

  • Thread starter harrylin
In summary: One general issue raised by the debates over locality is to understand the connection between stochastic independence (probabilities multiply) and genuine physical independence (no mutual influence). It is the latter that is at issue in “locality,” but it is the former that goes proxy for it in the Bell-like calculations. The argument presented in the linked article seems convincing.
  • #71
lugita15 said:
I completely agree with you that Herbert worded his conclusion a bit too strongly, because he took for granted that QM is correct in its experimental predictions, an assumption that has overwhelming evidence backing it up, but not definitive proof due to various loopholes like detector efficiency.
The more I become aware of the tricky details and the impressive experimental attempts to disprove "local realism", the more I am impressed by - to borrow some of your phrasing - the equally overwhelming survival of Einstein locality, but not definitive proof* due to various loopholes like detector efficiency and "noise". :wink:

*and of course, in science such a thing as "definitive proof" is anyway hardly possible!
 
  • #72
harrylin said:
The more I become aware of the tricky details and the impressive experimental attempts to disprove "local realism", the more I am impressed by - to borrow some of your phrasing - the equally overwhelming survival of Einstein locality, but not definitive proof due to various loopholes like detector efficiency and "noise". :wink:
But the thing is that pretty much every one of the loopholes has been closed separately; we just haven't managed to close all of them simultaneously in one experiment. So the local determinist is left having to come up with an explanation of the form "In THIS kind of experiment, the predictions of QM only appear to be right because of THIS loophole, but in THAT kind of experiment, the predictions of QM only appear to be right because of THAT loophole."
 
  • #73
lugita15 said:
But the thing is that pretty much every one of the loopholes has been closed separately; we just haven't managed to close all of them simultaneously in one experiment. So the local determinist is left having to come up with an explanation of the form "In THIS kind of experiment, the predictions of QM only appear to be right because of THIS loophole, but in THAT kind of experiment, the predictions of QM only appear to be right because of THAT loophole."
You seem to confuse "local realism" with "local determinism", but that's another topic. What we are concerned with is realistic measurements, and, contrary to what I think you suggest, I have not seen evidence of the necessity for more ad hoc "local" explanations of measurement results than for "non-local" explanations. And that's again a different topic than the one discussed here, so I'll leave it at that.
 
  • #74
harrylin said:
You seem to confuse "local realism" with "local determinism", but that's another topic.
Yes, sorry about that, I was using ThomasT's terminology. I meant what is normally called local realism.
What we are concerned with is realistic measurements, and, contrary to what I think you suggest, I have not seen evidence of the necessity for more ad hoc "local" explanations of measurement results than for "non-local" explanations. And that's again a different topic than the one discussed here, so I'll leave it at that.
To be specific, there are ion experiments that close the detection loophole but leave the communication loophole open, and there are photon experiments that close the communication loophole but leave the detection loophole open. So you have to say something like "Photons seem to obey QM when slower-than-light communication is ruled out only because the photon detectors are inefficient, but ions seem to obey QM even with perfectly efficient detection only because slower-than-light communication occurs." Doesn't that seem ad hoc to you?
 
  • #75
lugita15 said:
Yes, sorry about that, I was using ThomasT's terminology. I meant what is normally called local realism.
To be specific, there are ion experiments that close the detection loophole but leave the communication loophole open, and there are photon experiments that close the communication loophole but leave the detection loophole open. So you have to say something like "Photons seem to obey QM when slower-than-light communication is ruled out only because the photon detectors are inefficient, but ions seem to obey QM even with perfectly efficient detection only because slower-than-light communication occurs." Doesn't that seem ad hoc to you?
As I said it's a different topic than Nick Herbert's proof. Please start that topic with a new thread - thanks in advance!
 
  • #76
lugita15 said:
To be specific, there are ion experiments that close the detection loophole but leave the communication loophole open, and there are photon experiments that close the communication loophole but leave the detection loophole open. So you have to say something like "Photons seem to obey QM when slower-than-light communication is ruled out only because the photon detectors are inefficient, but ions seem to obey QM even with perfectly efficient detection only because slower-than-light communication occurs." Doesn't that seem ad hoc to you?
How (why) do (must) the ion experiments leave the communication loophole open? I don't know anything about these experiments, so I'm just asking. Why can't they close the communication loophole in the ion experiments ... seeing as how that loophole has been closed in other experiments?
 
  • #77
ThomasT said:
How (why) do (must) the ion experiments leave the communication loophole open? I don't know anything about these experiments, so I'm just asking. Why can't they close the communication loophole in the ion experiments ... seeing as how that loophole has been closed in other experiments?
There's no fundamental reason why it hasn't been closed; it's just that we have more experience doing Bell tests with photons, so people have developed good techniques for switching photon detector settings fast enough that the communication loophole is closed. If I were to guess, I would say that the detection loophole is more likely to be closed for photon experiments before the communication loophole is closed for ion experiments, if only because more people are going to work on improving photon detectors.
 
  • #78
harrylin said:
Note: in a subsequent thread on why hidden variables imply a linear relationship, it became clear that imperfect detection is not the only flaw in Herbert's claim about facts of measurement reality; another issue that Herbert missed is the effect of data picking such as with time coincidence windows.
- https://www.physicsforums.com/showthread.php?t=589923&page=6
The coincidence time loophole is likewise about imperfect matching of pairs. If, instead of talking about a "detection loophole", you talked about violation of the "fair sampling assumption", it would cover the coincidence time loophole just as well.

On the practical side, for this coincidence time loophole you would predict some relevant detections outside the coincidence time window. And that can be tested with the Weihs et al. data.
I got one dataset from the Weihs experiment (some time ago one was publicly available), loaded it into a MySQL database and then fooled around with different queries for quite some time. And I found that, first, as you increase the coincidence time window (beyond a certain value of a few ns) the correlations diminish at the level you would expect from random detections, and second, detection times do not correlate beyond some small time interval. Deviations within that small time interval are explained as detector jitter.
I have to say that this detector jitter seemed rather "unfair" with respect to different polarization settings (and that might be what Raedt saw in the data). But I could not do any further tests without more datasets.
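
For readers who want to try this kind of analysis themselves, here is a minimal sketch of the coincidence-window procedure zonde describes, in Python rather than SQL. The data format (two sorted lists of timestamps in ns with corresponding ±1 outcomes, here called t_alice, o_alice, etc.) is hypothetical and does not match the actual Weihs file layout; the point is only to show how the mismatch rate can be recomputed while the window is varied.

```python
import numpy as np

def pair_coincidences(t_a, t_b, window_ns):
    """Greedily pair Alice/Bob detection times lying within window_ns of each other.
    Assumes both time lists are sorted in ascending order."""
    pairs, j = [], 0
    for i, ta in enumerate(t_a):
        while j < len(t_b) and t_b[j] < ta - window_ns:
            j += 1                      # skip Bob events that are too early
        if j < len(t_b) and abs(t_b[j] - ta) <= window_ns:
            pairs.append((i, j))        # accept this coincidence
            j += 1
    return pairs

def mismatch_rate(out_a, out_b, pairs):
    """Fraction of paired detections whose +/-1 outcomes disagree."""
    return np.mean([out_a[i] != out_b[j] for i, j in pairs]) if pairs else float("nan")

# usage sketch (t_alice, o_alice, t_bob, o_bob are hypothetical data arrays):
# for window in (1, 2, 5, 10, 20, 50):   # ns
#     p = pair_coincidences(t_alice, t_bob, window)
#     print(window, len(p), mismatch_rate(o_alice, o_bob, p))
```

Sweeping the window and watching how the number of pairs and the mismatch rate change is essentially the test described above.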
 
  • #79
zonde said:
The coincidence time loophole is likewise about imperfect matching of pairs. If, instead of talking about a "detection loophole", you talked about violation of the "fair sampling assumption", it would cover the coincidence time loophole just as well. On the practical side, for this coincidence time loophole you would predict some relevant detections outside the coincidence time window. And that can be tested with the Weihs et al. data.
I understand "detection loophole" to mean incomplete detection due to detector inefficiencies, and I find it very instructive to distinguish that from data picking. Those are very different things - indeed, Weih's data illustrate the importance of that distinction rather well, it was even what I had in mind.
I got one dataset from the Weihs experiment (some time ago one was publicly available), loaded it into a MySQL database and then fooled around with different queries for quite some time.
Could you make that dataset available to physicsforums? I think that it's very instructive for this group to have access to real data instead of fantasy data such as presented by Herbert.
And I found that, first, as you increase the coincidence time window (beyond a certain value of a few ns) the correlations diminish at the level you would expect from random detections, and second, detection times do not correlate beyond some small time interval. Deviations within that small time interval are explained as detector jitter. I have to say that this detector jitter seemed rather "unfair" with respect to different polarization settings [..]
That's the third explanation that I see (earlier ones that I saw were "noise" and "non-entangled pairs"); I guess it's really high time to start a topic about ad hoc explanations!
 
  • #80
Nick Herbert's so-called "proof" is a nice introductory pop-sci illustration of Bell's inequality. It is certainly not a proof of anything, let alone a replacement for Bell's theorem. It does not have a clear statement of either assumptions or conclusions. And the key sentence of the "proof" does not stand up to scrutiny:
Starting with two completely identical binary messages, if A's 30 degree turn introduces a 25% mismatch and B's 30 degree turn introduces a 25% mismatch, then the total mismatch (when both are turned) can be at most 50%.
Now, where do these two sequences come from? And why would the difference be 25%? 25% is only an average; in theory it is possible (however unlikely) to get any value between 0% and 100%. And how can we possibly compare sequences at different angle settings when they keep changing every time? Frankly, I'm surprised billschnieder didn't rip it apart.

The answer of course is that one should not assume the same sequence when measuring different angle settings. Instead one must consider all possible sequences according to their probability distribution. Welcome λ, ρ(λ) and the rest of Bell's proof. As a result, the inequality only holds statistically; that is, the probability of violating it goes to zero as the sequence length goes to ∞, but never vanishes completely.
 
  • #81
Delta Kilo said:
Nick Herbert's so-called "proof" is a nice introductory pop-sci illustration of Bell's inequality. It is certainly not a proof of anything, let alone a replacement for Bell's theorem. It does not have a clear statement of either assumptions or conclusions.
I agree that Herbert's argument is worded rather informally, but I think his reasoning is fundamentally sound. In my blog post here, I try to restate his logic a bit more precisely, but only a bit, because I think it's mostly fine as it is.
Delta Kilo said:
Now, where do these two sequences come from? And why would the difference be 25%? 25% is only an average; in theory it is possible (however unlikely) to get any value between 0% and 100%. And how can we possibly compare sequences at different angle settings when they keep changing every time? Frankly, I'm surprised billschnieder didn't rip it apart.
A 25% mismatch is just an informal way of saying that the probability of mismatch between corresponding bits of the two measured sequences is 25%. If that's not clear in Herbert's exposition, I hope I made that clear in my blog post.
Delta Kilo said:
The answer of course is that one should not assume the same sequence when measuring different angle settings.
Well, the notion that mismatches between the two sequences really represent deviations from a common 0-degree binary "message" is deduced from the fact that you get identical behavior at identical polarizer settings. Thus, assuming counterfactual definiteness (and excluding superdeterminism), we can conclude that even when we don't turn the polarizers to the same angle, it is still true that we WOULD have gotten identical behavior if we HAD turned the polarizers to identical angles. And if you believe in locality, the only way this is possible is for the two photons in each photon pair to have agreed in advance exactly what angles to both go through and what angles not to go through. If, for a particular photon pair, they have agreed that they should go through at 0 degrees, that is represented as a 1; otherwise, it would be a 0. This is the 0-degree binary message, and mismatches when you turn the polarizer settings away from 0 degrees are supposed to indicate deviations from this.
Delta Kilo said:
As a result, the inequality only holds statistically; that is, the probability of violating it goes to zero as the sequence length goes to ∞, but never vanishes completely.
Of course Herbert's proof is about probabilities. The Bell inequality in his proof can be stated as: the probability of mismatch at 60 degrees is less than or equal to twice the probability of mismatch at 30 degrees. Now of course, like any probability, you can't find it with perfect accuracy using only finitely many runs of the experiment. It's just like when you flip a fair coin a billion times: you're not guaranteed to get exactly half heads and half tails. You just have to estimate the observed probability and extrapolate to the limit as n goes to infinity. There is nothing special about Bell in this respect.
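
As a quick numerical illustration of the inequality as stated here (my own check, not part of Herbert's text): assuming the usual sin²θ form for the mismatch probability of polarization-entangled pairs, which reproduces Herbert's 25% figure at a 30° relative angle, the quantum prediction at 60° clearly exceeds twice the 30° value.

```python
import math

def qm_mismatch(theta_deg):
    # assumed QM prediction for polarization-entangled photon pairs:
    # probability that the two detectors disagree at relative angle theta
    return math.sin(math.radians(theta_deg)) ** 2

m30 = qm_mismatch(30)              # 0.25, Herbert's 25%
m60 = qm_mismatch(60)              # 0.75
print(m30, m60, m60 <= 2 * m30)    # 0.25 0.75 False -> the local bound is violated
```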
 
  • #82
Delta Kilo said:
[..]
The answer of course is that one should not assume the same sequence when measuring different angle settings. Instead one must consider all possible sequences according to their probability distribution. Welcome λ, ρ(λ) and the rest of Bell's proof. As a result, the inequality only holds statistically, that is the probability of violating it goes towards zero with sequence length going to ∞, but never vanishes completely.
He doesn't assume the same sequence, and neither does he need unknown variables or probability distributions over them - good riddance! - and that's the part that I appreciate.
Instead he makes claims (though overly simplified) about the observed statistical correlations and what a non-spooky model could predict for such correlations. He doesn't even need probability analysis, only the most basic understanding of statistical behaviour.
 
  • #83
harrylin said:
He doesn't assume the same sequence, and neither does he need unknown variables or probability distributions over them - good riddance! - and that's the part that I appreciate.
I guess we have a difference of opinion on this point. But for the benefit of others, here was my response to you earlier in this thread:
lugita15 said:
Let me try again. This is the crucial step where counterfactual definiteness is invoked: "Starting with two completely identical binary messages, if A's 30 degree turn introduces a 25% mismatch and B's 30 degree turn introduces a 25% mismatch." He is very clear: A's 30 degree turn introduces a 25% mismatch from the 0 degree binary message. He is saying this even in the case when B has also turned his detector, so that no one is actually measuring this particular bit of the 0 degree binary message. The only bits of the 0 degree binary message that are actually observed are the ones for which one of the detectors was turned to 0 degrees. And yet he is asserting that even when neither of the detectors is pointed at 0 degrees, the mismatches between the two detectors still represent errors from the 0 degree binary message. Isn't discussion of deviation from unmeasured bits of a binary message a clear case of counterfactual definiteness, AKA realism?
 
  • #84
lugita15 said:
Of course Herbert's proof is about probabilities.
If it is, he fails to mention that. Words such as 'probability', 'distribution', 'statistics', 'expectation' are conspicuously absent from his text.

What I'm saying is: we know what he says is basically right, because it is backed by the machinery of Bell's mathematical proof. Without it, taken literally, it is full of holes you could ride an elephant through. As such, it is susceptible to the factorization argument, you know the kind usually pushed by Bill here, to which the original Bell's proof is immune.

And since Herbert's 'proof' has neither assumptions nor conclusions, people can argue about the meaning of it until the Second Coming of the Great Prophet Zarquon.
 
  • #85
lugita15 said:
I guess we have a difference of opinion on this point. But for the benefit of others, here was my response to you earlier in this thread:
[..] "Starting with two completely identical binary messages, if A's 30 degree turn introduces a 25% mismatch and B's 30 degree turn introduces a 25% mismatch."
Although we surely disagreed about what Herbert did not write, I think that we fully agreed about what Herbert wrote - and this is also what I referred to in my reply to Delta Kilo. Perhaps DK understood "completely identical" to mean a fixed code that can be repeated, but obviously that isn't what is measured nor what Herbert meant.
 
  • #86
Delta Kilo said:
If it is, he fails to mention that. Words such as 'probability', 'distribution', 'statistics', 'expectation' are conspicuously absent from his text.

What I'm saying is: we know what he says is basically right, because it is backed by the machinery of Bell's mathematical proof. Without it, taken literally, it is full of holes you could ride an elephant through. As such, it is susceptible to the factorization argument, you know the kind usually pushed by Bill here, to which the original Bell's proof is immune.

And since Herbert's 'proof' has neither assumptions nor conclusions, people can argue about the meaning of it until the Second Coming of the Great Prophet Zarquon.
Rather than waiting for Zarquon, why don't you look at the restatement of Herbert's proof in my blog post here, and tell me what flaws or gaps you see in that?
 
  • #87
Delta Kilo said:
If it is, he fails to mention that. Words such as 'probability', 'distribution', 'statistics', 'expectation' are conspicuously absent from his text.
Words like "distribution" and "statistics" are replaced in Herbert's text by the applied statistic observation terms such as "seemingly random 50/50 sequence of zeros and ones", "Match between the two sequences", etc.
[..] taken literally, it is full of holes you could ride an elephant through. As such, it is susceptible to the factorization argument, you know the kind usually pushed by Bill here [..]
I'm afraid that apart from a misrepresentation of real data from optics experiments (thanks again Zonde - and also thanks gil, sorry that I couldn't follow your explanation!), I don't know any of those "holes" although I asked for them. Please elaborate!
And since Herbert's 'proof' has neither assumptions nor conclusions [..]
Sure it has, as discussed earlier in this thread.
 
  • #88
harrylin said:
Although we surely disagreed about what Herbert did not write, I think that we fully agreed about what Herbert wrote - and this is also what I referred to in my reply to Delta Kilo. Perhaps DK understood "completely identical" to mean a fixed code that can be repeated, but obviously that isn't what is measured nor what Herbert meant.
I assumed Delta Kilo was just referring to the bits of the 0 degree binary message, which are shared by the two particles (that is, the Nth bit of the 0 degree binary message is shared by the 2 photons in the Nth particle pair).
 
  • #89
harrylin said:
Words like "distribution" and "statistics" are replaced in Herbert's text by the applied statistic observation terms such as "seemingly random 50/50 sequence of zeros and ones", "Match between the two sequences", etc.
I'll leave it to linguists to figure out.

harrylin said:
I don't know any of those "holes" although I asked for them. Please elaborate!
One hole is direct reference in the 'proof' to the number of mismatches for different angles for the same coded sequence, which leaves it open to the argument that this is not what is measured in the actual experiment. And unlike Bell, in this particular case the argument is valid.

Another hole is the failure to mention the statistical character of the 'proof'. He does not say how long the sequence should be for the 'proof' to be valid. Clearly it does not work with a sequence of length 1, and it has a fair chance of failing with sequences of a smallish number of bits.

harrylin said:
Sure it has, as discussed earlier in this thread.
As I'm sure it will be discussed, again and again. In the absence of a rigorous formulation, every word of it is subject to personal interpretation (and there are a lot of words); there is just no way to make convincing arguments about it. A QM proof without any math in it. Great, just great.

Delta Kilo Over and Out.
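
Delta Kilo's point about sequence length can be made concrete with a toy simulation (my own illustration; the "instruction set" model and its mixture weights are invented for the purpose and are not from Herbert's text): take a local model that satisfies the inequality on average but only barely, estimate the three mismatch rates from separate, smallish batches of pairs, and see how often sampling noise alone produces an apparent violation.

```python
import random

random.seed(1)

# Toy local "instruction set" model: each pair carries predetermined
# pass(1)/block(0) answers for the three angles 0, 30 and 60 degrees.
# Population rates: R(0,30)=0.30, R(30,60)=0.30, R(0,60)=0.50, so the
# inequality R(0,60) <= R(0,30) + R(30,60) holds on average (0.50 <= 0.60).
def new_pair():
    r = random.random()
    if r < 0.25:
        return (1, 0, 0)   # disagrees between 0 and 30, and between 0 and 60
    if r < 0.50:
        return (1, 1, 0)   # disagrees between 30 and 60, and between 0 and 60
    if r < 0.55:
        return (1, 0, 1)   # disagrees between 0 and 30, and between 30 and 60
    return (1, 1, 1)       # agrees at all three angles

def sample_mismatch(i, j, n):
    """Measure n fresh pairs at the angle pair (i, j); return the observed mismatch rate."""
    hits = 0
    for _ in range(n):
        p = new_pair()
        hits += p[i] != p[j]
    return hits / n

runs, n, violations = 1000, 50, 0      # many short experiments of 50 pairs per setting pair
for _ in range(runs):
    r_ab = sample_mismatch(0, 1, n)
    r_bc = sample_mismatch(1, 2, n)
    r_ac = sample_mismatch(0, 2, n)
    violations += r_ac > r_ab + r_bc
print(violations / runs)               # a noticeable fraction of short runs "violate" by chance
```

With n in the thousands instead of 50, the fraction of chance violations shrinks toward zero, which is the statistical behaviour described above.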
 
  • #90
zonde said:
Deviations within that small time interval are explained as detector jitter.
I have to say that this detector jitter seemed rather "unfair" with respect to different polarization settings (and that might be what Raedt saw in the data). But I could not do any further tests without more datasets.
I wrote this from memory (I did the analysis ~3 years ago) and have to correct it: this jitter seems much more likely to come from the electronics rather than the detector. And calling it "unfair" might not be very correct, as it can have a local explanation.

harrylin said:
I understand "detection loophole" to mean incomplete detection due to detector inefficiencies, and I find it very instructive to distinguish that from data picking. Those are very different things - indeed, Weih's data illustrate the importance of that distinction rather well, it was even what I had in mind.

Could you make that dataset available to physicsforums? I think that it's very instructive for this group to have access to real data instead of fantasy data such as presented by Herbert.
I think you are missing the positive side of Herbert's fantasy data. Real experiments have a lot of different imperfections, and it is really good to have some simple baseline that can help you sort the important things from the others.

But you can PM me and I will send you the dataset.

harrylin said:
That's the third explanation that I see (earlier ones that I saw were "noise" and "non-entangled pairs"); I guess it's really high time to start a topic about ad hoc explanations!
Take a look at this paper:
A Close Look at the EPR Data of Weihs et al
It basically does an analysis analogous to the one I made.

And there is another one from the same author:
Explaining Counts from EPRB Experiments: Are They Consistent with Quantum Theory?

If you are interested in comparing that analysis with mine, I have an Excel file left over from my analysis: see the attachment.
 

Attachments

  • Weihs_table.xlsx.zip (58 KB)
  • #91
Delta Kilo said:
[..] One hole is direct reference in the 'proof' to the number of mismatches for different angles for the same coded sequence, which leaves it open to the argument that this is not what is measured in the actual experiment.
Except that, as we explained several times, Herbert of course does not imply the same coded sequence. But if you didn't mean that literally, perhaps you referred to the same issue as the imperfect detection mentioned by gil and zonde - and that's not just a hole but a pertinent error in Herbert's presentation.
Another hole is the failure to mention the statistical character of the 'proof'. He does not say how long the sequence should be for the 'proof' to be valid. Clearly it does not work with a sequence of length 1, and it has a fair chance of failing with sequences of a smallish number of bits. As I'm sure it will be discussed, again and again.
I would not call that a hole, as it is implicit in the argumentation; how could anyone who is educated not understand that, for example, a "50/50 random result" refers to a statistical process? Thus I doubt that it will be discussed in this thread - at least, I would not spend more than a single reply if such a groundless objection were raised.
In the absence of a rigorous formulation, every word of it is subject to personal interpretation (and there are a lot of words); there is just no way to make convincing arguments about it. A QM proof without any math in it. Great, just great.
Delta Kilo Over and Out.
No problem, Over and Out!

Just for completeness: the math is certainly there, implicit in the words; it's inherent to physics that it is concerned with more than just math (for example Newton's Principia and Faraday's theory are mostly words and illustrations, with little pure math).
 
  • #92
zonde said:
[...]I think you are missing the positive side of Herbert's fantasy data. [..]
That may be true; I think that his fantasy data are very good, as long as it is stressed that they are not a good reflection of real data processing.
If you are interested in comparing that analysis with mine I have some excel file left from my analysis: see attachment
I would like to, but of course not in a thread on Herbert's proof! Thus, I will now start a thread on that topic.
 
  • #93
lugita15 said:
But the thing is that pretty much every one of the loopholes has been closed separately; we just haven't managed to close all of them simultaneously in one experiment. So the local determinist is left having to come up with an explanation of the form "In THIS kind of experiment, the predictions of QM only appear to be right because of THIS loophole, but in THAT kind of experiment, the predictions of QM only appear to be right because of THAT loophole."
The predictions of QM are just math without a physical model. But the same math can apply to very different physical situations, so the argument that the physical situation should be the same because the math is the same does not hold water, IMHO.

And I think there are arguments why a photon should be viewed as radically different from an ion. Matter particles are the type that do "communicate" among themselves, as they can form persistent structures. Photons, on the other hand, are agents of "communication" rather than nodes in a structure.

And then there is a more philosophical, handwaving type of justification for a local realistic explanation. But that would be interesting only if you want to understand the local realistic position rather than test its strength.

lugita15 said:
There's no fundamental reason why it hasn't been closed; it's just that we have more experience doing Bell tests with photons, so people have developed good techniques for switching photon detector settings fast enough that the communication loophole is closed. If I were to guess, I would say that the detection loophole is more likely to be closed for photon experiments before the communication loophole is closed for ion experiments, if only because more people are going to work on improving photon detectors.
If you decide to start a new topic about loophole-free experiments, I can propose some interesting papers for discussion:
On ion side:
Bell inequality violation with two remote atomic qubits
An Elementary Quantum Network of Single Atoms in Optical Cavities
On photon side:
Conclusive quantum steering with superconducting transition edge sensors
 
  • #94
harrylin said:
I still don't get it... Herbert's proof doesn't even consider particles, let alone both particles or the same photon pairs.

Here is how I apply Herbert's proof to the scenario of incomplete detection, following his logic by the letter and adding my comments: ...
Step Two: Tilt the A detector till errors reach 25%. This occurs at a mutual misalignment of 30 degrees.

Let us combine this with the other assumptions about how Herbert's SPOT detector works. According to Herbert's description of his SPOT detector, detector 1 fires 0% of the time when tilted at 90°, 50% of the time when tilted at 45°, and 100% of the time when tilted at 0°. Had he stopped there, it would appear to be linear. However, Herbert goes on to say that detector 1 fires 25% of the time when tilted at 30°. Clearly the functioning of the SPOT detector cannot be linear with respect to angle. His own description of the functioning of the detector cannot be explained by a linear function.

More later.
 
  • #95
billschnieder said:
Let us combine this with the other assumptions about how Herbert's SPOT detector works. According to Herbert's description of his SPOT detector, detector 1 fires 0% of the time when tilted at 90°, 50% of the time when tilted at 45°, and 100% of the time when tilted at 0°. Had he stopped there, it would appear to be linear. However, Herbert goes on to say that detector 1 fires 25% of the time when tilted at 30°. Clearly the functioning of the SPOT detector cannot be linear with respect to angle. His own description of the functioning of the detector cannot be explained by a linear function.

More later.
At first sight, that issue doesn't matter for Herbert's proof. I copy back my overview here, with a little modification based on the later discussion. It seems to me that the bold part is valid no matter whether the relationship is linear or not:

----------------------------------------------------------------------------
Step One: Start by aligning both SPOT detectors. No errors are observed.

[Note that, as we next discussed, this is perhaps the main flaw of Herbert's proof, as it implies 100% detection and zero mismatches. But it is interesting to verify "what if":]

[harrylin: for example the sequences go like this:

A 10010110100111010010
B 10010110100111010010]

Step Two: Tilt the A detector till errors reach 25%. This occurs at a mutual misalignment of 30 degrees.

[harrylin: for example (a bit idealized) the sequences go like this:

A 10010100110110110110
B 10110100111010010010

This mismatch could be partly due to the detection of different photon pairs.]

Step Three: Return A detector to its original position (100% match). Now tilt the B detector in the opposite direction till errors reach 25%. This occurs at a mutual misalignment of -30 degrees.

[harrylin: for example the sequences go like this, for the same reasons:

A 10100100101011010011
B 10010101101011010101]

Step Four: Return B detector to its original position (100% match). Now tilt detector A by +30 degrees and detector B by -30 degrees so that the combined angle between them is 60 degrees.

What is now the expected mismatch between the two binary code sequences?

[..] Assuming a local reality means that, for each A photon, whatever hidden mechanism determines the output of Miss A's SPOT detector, the operation of that mechanism cannot depend on the setting of Mr B's distant detector. In other words, in a local world, any changes that occur in Miss A's coded message when she rotates her SPOT detector are caused by her actions alone.
[STRIKE][harrylin: apparently that includes whatever mechanism one could imagine - also non-detection of part of the photons][/STRIKE]
And the same goes for Mr B. [..] So with this restriction in place (the assumption that reality is local), let's calculate the expected mismatch at 60 degrees.

Starting with two completely identical binary messages, if A's 30 degree turn introduces a 25% mismatch and B's 30 degree turn introduces a 25% mismatch, then the total mismatch (when both are turned) can be at most 50%.
In fact the mismatch should be less than 50% because if the two errors happen to occur on the same photon, a mismatch is converted to a match.

[harrylin: and if the errors happen to occur on different photons that are compared, still sometimes a mismatch will be converted to a match. Thus now for example the sequences go like this, for the same reasons as +30 degrees and -30 degrees:

A 10101010110101010011
B 10100100101011010101]
----------------------------------------------------------------------------

It looks to me that the only thing one has to assume is that there is no conspiracy of photons based on how the detectors are relatively oriented - and even that is prevented by design in some tests. If you disagree, please detail how two 25% mismatches can, under the suggested ideal conditions, result in more than a 50% total mismatch.
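
To make the combining step concrete, here is a minimal simulation of the local picture sketched above (my own illustration, under the idealized assumptions of 100% detection and independent errors on each side, not Herbert's own construction): each side's 30° turn flips its own bits with probability 0.25, and the resulting total mismatch stays at or below 50%, in fact around 37.5%, because errors that land on the same bit cancel.

```python
import random

random.seed(0)
n = 100_000

# the shared 0-degree "message" carried by the photon pairs in the local-realist picture
message = [random.randint(0, 1) for _ in range(n)]

def turn(bits, error_rate):
    """Each side's turn flips its own bits independently with the given error rate."""
    return [b ^ (random.random() < error_rate) for b in bits]

a = turn(message, 0.25)   # Miss A at +30 degrees
b = turn(message, 0.25)   # Mr B at -30 degrees

mismatch = sum(x != y for x, y in zip(a, b)) / n
print(mismatch)   # about 0.375, never above 0.5: errors on the same bit turn a mismatch back into a match
```

QM, by contrast, predicts a 75% mismatch at a 60° relative angle, which no assignment of independent local errors of 25% per side can reach.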

Also, you commented elsewhere:
billschnieder said:
[..] - The second issue which I have discussed [..] is that the inequality is derived for possibilities which can never be simultaneously realized (actualized). In principle it is impossible to test experimentally, so taking experimental results on the basis that the probabilities are the same doesn't make sense. The probabilities may be the same, but not simultaneously.
I think that that relates to the same reasonable looking assumption of non-conspiracy - we assume that the moon shines even when we don't look, because it shines whenever we look. Do you claim that the statistics on one side can be affected by what is done on the other side? That appears very "non-local" to me!
 
  • #96
billschnieder said:
Let us combine this with the other assumptions about how Herbert's SPOT detector works. According to Herbert's description of his SPOT detector, detector 1 fires 0% of the time when tilted at 90°, 50% of the time when tilted at 45°, and 100% of the time when tilted at 0°. Had he stopped there, it would appear to be linear. However, Herbert goes on to say that detector 1 fires 25% of the time when tilted at 30°. Clearly the functioning of the SPOT detector cannot be linear with respect to angle. His own description of the functioning of the detector cannot be explained by a linear function.
First of all, it should be called sublinearity rather than linearity, because the form of the Bell inequality is something plus something is AT MOST something, not something plus something equals something. Second of all, the sublinearity is not an assumption; it is the conclusion of a careful argument. So you can't say "the sublinearity is contrary to experimental results, therefore the argument is invalid." The argument is, after all, a proof by contradiction. It assumes that local causality underlies quantum mechanical phenomena, uses this assumption to arrive at the conclusion that the mismatches must be sublinear, and then notes that the sublinearity runs contrary to the experimental predictions of QM.
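
For reference, here are the numbers (my own check, assuming the standard cos²θ match rate for polarization-entangled pairs, equivalently a sin²θ error rate, which is where Herbert's 25% at 30° comes from): the predicted rates are plainly not a straight line in the relative angle, and the error rate breaks the sublinearity bound, since sin²(60°) = 0.75 > 2·sin²(30°) = 0.5.

```python
import math

# assumed QM match rate for polarization-entangled pairs at relative angle theta
match = lambda deg: math.cos(math.radians(deg)) ** 2

for deg in (0, 30, 45, 60, 90):
    print(deg, round(match(deg), 3))
# 0 -> 1.0, 30 -> 0.75, 45 -> 0.5, 60 -> 0.25, 90 -> 0.0  (clearly not linear in the angle)
```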
 
  • #97
I'm now looking into simulations that try to imitate the outcomes of experiments such as the one described here, and I found that some seemingly unimportant differences in experimental settings can be of great importance.
Does anyone know of an experiment that exactly reproduced the set-up of Herbert's proof?
That protocol uses 0, +30° and -30° in a special way that is essential for the proof.

As a reminder:
Step One: Start by aligning both SPOT detectors. No errors are observed.
Step Two: Tilt the A detector till errors reach 25%. This occurs at a mutual misalignment of 30 degrees.
Step Three: Return A detector to its original position (100% match). Now tilt the B detector in the opposite direction till errors reach 25%. This occurs at a mutual misalignment of -30 degrees.
Step Four: Return B detector to its original position (100% match). Now tilt detector A by +30 degrees and detector B by -30 degrees so that the combined angle between them is 60 degrees.
 
  • #98
harrylin said:
That protocol uses 0, +30° and -30° in a special way that is essential for the proof.
No, those particular angles aren't essential for the proof at all. We can take any angles a, b, and c. Let R(θ1,θ2) be the error rate when the polarizers are oriented at θ1 and θ2. Then Herbert's proof shows that R(a,c) ≤ R(a,b) + R(b,c).

Choosing equally spaced angles just makes things simple.
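
A quick way to see how general the statement is (again my own illustration, assuming the sin² error-rate prediction used earlier in the thread): scan angle triples and list those where the quantum prediction exceeds the bound R(a,c) ≤ R(a,b) + R(b,c).

```python
import math
from itertools import combinations

def R(t1, t2):
    """Assumed QM error rate at relative polarizer angle t1 - t2 (degrees)."""
    return math.sin(math.radians(t1 - t2)) ** 2

# list angle triples a < b < c (in 10-degree steps) where QM beats the local bound
for a, b, c in combinations(range(0, 91, 10), 3):
    if R(a, c) > R(a, b) + R(b, c) + 1e-12:
        print((a, b, c), round(R(a, c), 3), ">", round(R(a, b) + R(b, c), 3))
```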
 
  • #99
harrylin said:
I'm now looking into simulations that try to imitate the outcomes of experiments such as the one described here
Are these simulations that resort to loopholes?
 
  • #100
lugita15 said:
No, those particular angles aren't essential for the proof at all. We can take any angles a, b, and c. Let R(θ1,θ2) be the error rate when the polarizers are oriented at θ1 and θ2. Then Herbert's proof shows that R(a,c) ≤ R(a,b) + R(b,c).

Choosing equally spaced angles just makes things simple.
I discovered that, more than elsewhere, the devil is in the details. Indeed, it doesn't have to be +30 degrees and -30 degrees; I think that +22 and -30 degrees is just as tough for "local reality"; his argument is not affected by that, I think. However, many experiments used protocols that don't match Herbert's proof. Any one?
lugita15 said:
Are these simulations that resort to loopholes?
Any simulation that manages to reproduce real observations will do so by some means - and I would not know which means would not be called "loopholes" by some. I'm interested in verifying such simulations against experiments that have actually been performed; but regretfully, many experiments have built-in loopholes by design. Herbert's design of experiment contains perhaps the fewest pre-baked loopholes, and that makes it challenging. Thus, once more: has his experiment actually been done, as he suggests?

PS: please don't present excuses why such an experiment has (maybe) not been performed; only answer if you can fulfill my request and give such matching data.
 
  • #101
Herbert's proof is a proof of Bell's theorem by consideration of a two-party, two-setting, two-outcome experiment. In other words, a CHSH-style experiment. Every CHSH-style experiment which has been done to date, and which had a successful outcome (a violation of the CHSH inequality), suffers from one of the "standard" loopholes, i.e. failure to comply with a rigorous experimental protocol requiring timing, spatial separation, rapid generation of random settings, and legal measurement outcomes. Every local-realistic simulation of the data of such an experiment has to exploit one of those loopholes. (Note that in the presence of perfect (anti)correlation in one setting pair, violation of Bell's original inequality and violation of the CHSH inequality are equivalent.)
 
  • #102
PS experts expect the definitive experiment within 5 years. Top experimental groups in Brisbane, Vienna and Singapore are very clearly systematically working towards this goal (whether or not they say so publicly), and no doubt others are in the race as well.
 
  • #103
harrylin said:
Indeed, it doesn't have to be +30 degrees and -30 degrees; I think that +22 and -30 degrees is just as tough for "local reality"; his argument is not affected by that, I think.
Good, at least we're agreed on that.
However, many experiments used protocols that don't match Herbert's proof. Any one?
Just to be clear, by protocol do you mean his procedure of first aligning both polarizers, then tilting one until you get a certain error rate, then tilting it back and tilting the other one in the opposite direction until you get a certain error rate, and then tilting both in opposite directions? That particular procedure is as irrelevant as the choice of angles a, b, and c. What matters is that you test the error rates for a and c, a and b, and b and c.
Any simulation that manages to reproduce real observations will do so by some means - and I would not know which means would not be called "loopholes" by some.
Fair enough, I think it's pretty clear what is and isn't a loophole. Let me ask you this: do the simulations you're examining exploit either the communication loophole or the fact that detection equipment is imperfect?
I'm interested in verifying such simulations against experiments that have actually been performed; but regretfully, many experiments have built-in loopholes by design. Herbert's design of experiment contains perhaps the fewest pre-baked loopholes, and that makes it challenging. Thus, once more: has his experiment actually been done, as he suggests?
When you say "as he suggests", do you specifically want an experiment capable of testing his inequality? Well, his inequality can only be tested if you have 100% detector efficiency. (Otherwise you need the CHSH inequality.) The only experiment to date that achieved that was the ion experiment by Rowe, but that experiment didn't close the communication loophole.

Or do you want an experiment that tested the CHSH inequality instead, but used a more "Herbert-like" setup in whatever sense you mean it?

PS: please don't present excuses why such an experiment has (maybe) not been performed; only answer if you can fulfill my request and give such matching data
It's unclear what you want. If you're looking for a loophole-free Bell test, then we're still working on that.
 
  • #104
In Herbert's setup we know in advance that we will first do a heap of (0,0) measurements, then a heap of (0,30), and so on. If the number of each kind is fixed in advance, then it's rather easy to come up with an LHV computer simulation which does the job exactly: the freedom loophole. If the numbers are not known, then you can easily do it if you also use the memory loophole.

I suppose someone who did Herbert's *experiment* wouldn't demand an exactly zero error rate in the (0,0) configuration. They'd allow a small error rate. So in effect, they'd test CHSH. CHSH looks at four correlations. Fix one at +1, and you reduce it to Bell's inequality, which is essentially Herbert.

See arXiv:1207.5103 by RD Gill (me); I uploaded a revised version last night. It will be available from Tue, 20 Aug 2013 00:00:00 GMT.
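
To illustrate the "settings known in advance" point (my own toy sketch, not the construction in the paper above): if the whole list of setting pairs is fixed before the run, a local program can simply precompute, trial by trial, outcomes drawn from the quantum joint distribution. Exploiting the freedom loophole amounts to letting the "hidden variables" depend on the settings in exactly this way.

```python
import math
import random

random.seed(2)

def qm_outcomes(a_deg, b_deg):
    """Draw one pair of +/-1 outcomes with agreement probability cos^2(a-b),
    the assumed statistics of Herbert-style polarization-entangled pairs."""
    p_same = math.cos(math.radians(a_deg - b_deg)) ** 2
    x = random.choice((1, -1))
    return (x, x) if random.random() < p_same else (x, -x)

# "freedom loophole" toy: the setting list for every trial is fixed in advance,
# so the outcomes can be scripted locally before the run even starts
settings = [(0, 0)] * 1000 + [(0, 30)] * 1000 + [(30, 0)] * 1000 + [(0, 60)] * 1000
script = [qm_outcomes(a, b) for a, b in settings]   # reproduces the QM statistics
```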
 
  • #105
lugita15 said:
[..] Just to be clear, by protocol do you mean his procedure of first aligning both polarizers, then tilting one until you get a certain error rate, then tilting it back and tilting the other one in the opposite direction until you get a certain error rate, and then tilting both in opposite directions? That particular procedure is as irrelevant as the choice of angles a, b, and c. What matters is that you test the error rates for a and c, a and b, and b and c.
Yes, what matters to me is the kind of angles that are actually tested, as required for his proof. If there were a paper on an experiment that actually followed Nick Herbert's proof as its protocol, then it would be easier to explain (and there would be no need to explain). But apparently that hasn't been done...
Fair enough, I think it's pretty clear what is and isn't a loophole. Let me ask you this: do the simulations you're examining exploit either the communication loophole or the fact that detection equipment is imperfect?
No communication loophole is used, and the output signals at 0 degrees offset are 100% if that is what you mean. But this thread is not about simulation programs; my question is about Herbert's proof.
When you say "as he suggests", do you specifically want an experiment capable of testing his inequality? [..] Or do you want an experiment that tested the CHSH inequality instead, but used a more "Herbert-like" setup in whatever sense you mean it?
I ask for the data of an experiment that did what I put in bold face: by "set-up" I mean a protocol that matches his proof. Likely one or two were done that contain it as a subset. The program that I tested passed a CHSH test with flying colours (there could be an error somewhere, of course!) but failed the protocol of Nick Herbert. As Herbert's test is much clearer and simpler, that's what I now focus on.
 
