Nick Herbert's Proof: Quantum Non-Locality Explained

  • Thread starter: harrylin
  • Tags: Proof
  • #51
"So Herbert is assuming it makes sense to ask what *would have* occurred if you oriented one the SPOT detectors at 0 degree angle, even when the detectors are actually oriented at -30 degrees and 30 degrees. So the assumption is that regardless of what measurements you actually do, there are still well-defined answers (although unknown to the experimenters) for the results of measurements you did not do."

That is it, spot on. That is what people call "realism". After that, the notion of "locality" is applied to those counterfactual outcomes of the unperformed measurements.
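To make that concrete, here is a toy sketch of what "realism" amounts to formally (my own illustration, not Herbert's construction): every photon pair carries a complete table of predetermined answers, one per possible setting, whether or not that setting is ever measured.

```python
import random

ANGLES = (-30, 0, 30)  # the three settings in Herbert's argument

def make_pair():
    """A "realistic" photon pair: one predetermined answer (0 or 1) per
    possible detector angle. Both photons carry the same table, which is
    what guarantees perfect matches at identical settings."""
    table = {angle: random.randint(0, 1) for angle in ANGLES}
    return table, dict(table)

def measure(photon, angle):
    """Locality: the outcome depends only on this photon's own table and
    the local setting, never on the distant detector's orientation."""
    return photon[angle]

a, b = make_pair()
print(measure(a, 0) == measure(b, 0))   # always True at equal settings
print(measure(a, -30), measure(b, 30))  # the unmeasured answers exist anyway
```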
 
  • #52
Nobel laureate Gerard 't Hooft (whom I have talked to about this a number of times) is a superdeterminist when we are talking about the quantum world and what might be below or behind it at even smaller scales; what he apparently fails to realize is that Bell's argument applies to objects in the macroscopic world, or supposed macroscopic world - actual detector clicks, and the clicks which the detectors would have made if they had been aligned differently.
 
  • #53
gill1109 said:
Nobel laureate Gerard 't Hooft (whom I have talked to about this a number of times) is a superdeterminist when we are talking about the quantum world and what might be below or behind it at even smaller scales; what he apparently fails to realize is that Bell's argument applies to objects in the macroscopic world, or supposed macroscopic world - actual detector clicks, and the clicks which the detectors would have made if they had been aligned differently.

That is what I really don't "get" about 't Hooft's position: there are essentially an infinite number of possible macroscopic "decision machines" that could be used to select detector alignment, and all of them must be "in" on the conspiracy.

For example, my Aunt Miriam could make the decisions for one of the detectors, while the other is controlled by a computer which gets apparently random seeds from a Geiger counter near a radioactive sample. And yet he is saying these are not only predetermined, but acting in a coordinated manner.

Now if you knew my Aunt Miriam, you would know how ridiculous this actually sounds. :smile: At any rate, it certainly implies an internal physical structure far beyond anything previously discovered. I would estimate that every particle must have some kind of local superdeterministic DNA to account for my Aunt Miriam and the radioactive sample, as well as for any other pair of macroscopic selection devices, of which there would be many.
 
  • #54
lugita15 said:
No, that's not what I meant, but that's also an important assumption, known as the "no-conspiracy condition".
[..]
Anyway, what I was talking about was when Herbert says this: "Starting with two completely identical binary messages, if A's 30 degree turn introduces a 25% mismatch and B's 30 degree turn introduces a 25% mismatch, then the total mismatch (when both are turned) can be at most 50%." He's assuming that whenever you get a mismatch between the -30 degree polarizer and the 30 degree polarizer, this really represents a deviation of one of the polarizer measurements from the "identical binary messages" that would have been gotten if you had put both polarizers at 0 degrees.
That's the subtle detail that I disagree with: he speaks not about "would have been gotten" but about what "are observed". I think that that is a stronger argument. :smile:
Without the assumption that there is counterfactual definiteness at 0 degrees, you can't conclude that the percentage (i.e. the probability) of mismatches at -30 and 30 is less than or equal to the percentage of mismatches at -30 and 0 plus the percentage of mismatches at 0 and 30. The "successful" local hidden variable models you're talking about, like the ones zonde was referring to, do not actually reproduce all the experimental predictions of quantum mechanics. [..]
Herbert's proof asserts something slightly different from Bell's theorem, as I emphasised earlier: his claim isn't about theory but about facts of nature. What I called "successful" is reproducing those measurement facts (real ones, as opposed to imagined ones) with a "local realistic" model - exactly what Herbert's proof asserts cannot be done.

It reminds me a bit of Ehrenfest's perfectly rigid disk: according to SR it cannot be made to rotate, but it has not been possible to disprove that aspect of SR - simply because SR contains the "loophole" that such a disk cannot be made. :wink:
 
  • #55
I don't see any difference between the theorem Herbert is proving and the one Bell is proving, especially in the light of Arthur Fine's (1982) theorem showing the equivalence of the CHSH inequalities and the existence of a joint probability distribution for the outcomes of all the different measurements on the two particles.
 
  • #56
harrylin said:
That's the subtle detail that I disagree with: he speaks not about "would have been gotten" but about what "are observed". I think that that is a stronger argument. :smile:
But if the detectors are oriented at -30 degrees and 30 degrees, speaking about 0 degrees is clearly counterfactual reasoning. Herbert refers to the "binary message", the sequence of 0's and 1's you would have gotten if you had oriented the detectors at 0 degrees, and he considers mismatches between the -30 degree detector and the 30 degree detector to arise from deviations from this initial binary sequence. That is how he is able to say that a mismatch between -30 and 30 requires a mismatch between -30 and 0 or a mismatch between 0 and 30, and thus the percentage of mismatches between -30 and 30 is less than or equal to the percentage of mismatches between -30 and 0 plus the percentage of mismatches between 0 and 30.
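Spelled out as one line of bookkeeping (a paraphrase of this step, not Herbert's wording): write A, B, C for a pair's predetermined bits at -30°, 0° and +30°. If A ≠ C then necessarily A ≠ B or B ≠ C, so the event {A ≠ C} is contained in the union {A ≠ B} ∪ {B ≠ C}, and hence

P(A ≠ C) ≤ P(A ≠ B) + P(B ≠ C).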
Herbert's proof asserts something slightly different from Bell's theorem, as I emphasised earlier: his claim isn't about theory but about facts of nature.
But the "facts of nature" that Herbert discusses have not been entirely confirmed by experiments in a way that skeptics cannot dispute. If you ask zonde, he will insist vehemently that current experiments do not allow you to definitively test the claim of quantum mechanics that entangled photons exhibit identical behavior at identical angles, due to various loopholes like fair sampling and detector efficiency that currently practical Bell tests fall victim to. But what Herbert is showing, and I think Bell was showing the same thing, is that if we accept that quantum mechanics is completely right about all its experimental predictions, like identical behavior at identical angles, then no local hidden variable theory will be able to account for all of these facts of nature.
What I called "successful" is reproducing those measurement facts (real ones, as opposed to imagined ones) with a "local realistic" model - exactly what Herbert's proof asserts cannot be done.
But in Herbert's proof, we are talking about "imagined" measurement facts, at least for now, because the experiment he discusses is an ideal Bell test free from experimental loopholes, and we haven't done such a perfect experiment yet (although we're getting there...). But you're right, if the empirical facts of nature are as Herbert (and quantum mechanics) say they are, then the thesis that reality is local can be deemed rejected.
 
  • #57
gill1109 said:
Nobel laureate Gerard t'Hooft (who I have talked to about this a number of times) is a superdeterminist when we are talking about the quantum world and what might be below or behind it at even smaller scales; what he apparently can't realize is that Bell's argument applies to objects in the macroscopic world, or supposed macroscopic world - actual detector clicks and the clicks which the detectors would have made if they had been aligned differently.
How did 't Hooft respond when you brought up this point to him?
 
  • #58
't Hooft didn't understand the point. Nor when other colleagues tried to explain it to him.

About Herbert's proof: Bell's theorem is about counterfactual outcomes of unperformed measurements. Herbert is careless in his language (or is not being explicit enough). By definition, no experiment can ever prove the theorem.

Experiments can merely confirm the predictions of QM. Good experiments do that in situations which rule out local realist explanations that exploit, e.g., the detection loophole, or the possibility that the setting in one wing of the experiment is available in the other wing before the measurement is concluded. Good experiments incorporate, as physical constraints, the assumptions which are made in the proof.

E.g. the outcomes are +1 or -1, not +1 or -1 or "no show". The function A *can't* depend on b, because the value of b can't be available...
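For reference, those constraints are exactly what enter Bell's correlation formula; in the standard notation (the usual formalization, not a quote from Herbert):

E(a,b) = ∫ A(a,λ) B(b,λ) ρ(λ) dλ,  with A(a,λ), B(b,λ) ∈ {+1, −1},

where A does not depend on the distant setting b, B does not depend on a, and the distribution ρ(λ) is independent of both settings (the no-conspiracy condition).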
 
  • #59
lugita15 said:
But if the detectors are oriented at -30 degrees and 30 degrees, speaking about 0 degrees is clearly counterfactual reasoning. Herbert refers to the "binary message", the sequence of 0's and 1's you would have gotten if you had oriented the detectors at 0 degrees, [..]
There you go again! And again I must reply: no, he refers to the sequence that he claims you obtain each time you orient the detectors at 0 degrees. That is not about a conditional, hypothetical experience of a non-observed photon, but a factual experience of observed events. I found that really nice.
and he considers mismatches between the -30 degree detector and the 30 degree detector to arise from deviations from this initial binary sequence. That is how he is able to say that a mismatch between -30 and 30 requires a mismatch between -30 and 0 or a mismatch between 0 and 30, and thus the percentage of mismatches between -30 and 30 is less than or equal to the percentage of mismatches between -30 and 0 plus the percentage of mismatches between 0 and 30. But the "facts of nature" that Herbert discusses have not been entirely confirmed by experiments in a way that skeptics cannot dispute. If you ask zonde, he will insist vehemently that current experiments do not allow you to definitively test the claim of quantum mechanics that entangled photons exhibit identical behavior at identical angles, due to various loopholes like fair sampling and detector efficiency that currently practical Bell tests fall victim to.
What mattered to me was that Herbert made a seemingly rock solid claim about Nature and possible models of Nature that has been falsified - and I was frustrated because I did not find the error. Zonde was so kind to point the error out to me.
But what Herbert is showing, and I think Bell was showing the same thing, is that if we accept that quantum mechanics is completely right about all its experimental predictions, like identical behavior at identical angles, then no local hidden variable theory will be able to account for all of these facts of nature.
From reading up on this topic I discovered that there is some fuzziness about what exactly QM predicts for some real measurements; but the models that I heard about accurately reproduce what is measured in a typical Herbert set-up. Following your logic, we should conclude that QM is wrong. However, I think that that is not necessarily the case.
But in Herbert's proof, we are talking about "imagined" measurement facts, at least for now, because the experiment he discusses is an ideal Bell test free from experimental loopholes, and we haven't done such a perfect experiment yet (although we're getting there...). But you're right, if the empirical facts of nature are as Herbert (and quantum mechanics) say they are, then the thesis that reality is local can be deemed rejected.
Well Herbert fooled me there - and apparently he was fooled himself. :rolleyes:
 
  • #60
harrylin said:
From reading up on this topic I discovered that there is some fuzziness about what exactly QM predicts for some real measurements...

I am not aware of any controversy with regards to the predictions of QM in any particular setup. Every experimental paper (at least those I have seen) carefully compares the QM predictions to actual results, usually in the form of a graph and an accompanying table. These are peer-reviewed.
 
  • #61
harrylin said:
There you go again! And again I must reply: no, he refers to the sequence that he claims you obtain each time you orient the detectors at 0 degrees. That is not about a conditional, hypothetical experience of a non-observed photon, but a factual experience of observed events. I found that really nice.
Let me try again. This is the crucial step where counterfactual definiteness is invoked: "Starting with two completely identical binary messages, if A's 30 degree turn introduces a 25% mismatch and B's 30 degree turn introduces a 25% mismatch." He is very clear: A's 30 degree turn introduces a 25% mismatch from the 0 degree binary message. He is saying this even in the case when B has also turned his detector, so that no one is actually measuring this particular bit of the 0 degree binary message. The only bits of the 0 degree binary message that are actually observed are the ones for which one of the detectors was turned to 0 degrees. And yet he is asserting that even when neither of the detectors is pointed at 0 degrees, the mismatches between the two detectors still represent errors from the 0 degree binary message. Isn't discussion of deviation from unmeasured bits of a binary message a clear case of counterfactual definiteness, AKA realism?
What mattered to me was that Herbert made a seemingly rock solid claim about Nature and possible models of Nature that has been falsified
Perhaps Herbert should have phrased his claim slightly less boldly, because various practical loopholes make it hard to perfectly do the experiment he is talking about. But while it is true that experimental limitations prevent us at the current moment from absolutely definitively ruling out all local hidden variable models, we're getting there quickly, as I think zonde has said.
- and I was frustrated because I did not find the error. Zonde was so kind to point the error out to me.
Herbert is not making any "errors". The main point of the proof, even if Herbert didn't state it quite like this, is to show that unless quantum mechanics is wrong about the experimental predictions it makes concerning entanglement, we can deem local hidden variable models to be ruled out.
From reading up on this topic I discovered that there is some fuzziness about what exactly QM predicts for some real measurements;
No, there isn't.
but the models that I heard about accurately reproduce what is measured in a typical Herbert set-up.
First of all, the term "Herbert set-up" is a bit cringe-inducing; as Herbert himself says, "It has appeared in some textbooks as 'Herbert's Proof' where I would have preferred 'Herbert's Version of Bell's Proof'". (And as I told you before, although Herbert apparently came up with it independently, the -30, 0, 30 example was the one used by Bell when he tried to explain his proof to popular audiences.)

But anyway, you're right that there are local hidden variable models that are not unequivocally ruled out by currently practical Bell tests. But that probably says more about current experimental limitations than it does about the success of those models.
Following your logic, we should conclude that QM is wrong.
No, we shouldn't. If a perfect, loophole-free Bell test, like the one Herbert envisions, gave results consistent with the possibility of a local hidden variable model, then yes there may be just cause to abandon QM. But until that time, how can you conclude such a thing from the logic?
Well Herbert fooled me there - and apparently he was fooled himself. :rolleyes:
No, I don't think so. The only point I'd concede is that he might want to qualify his remarks in all caps that "NO CONCEIVABLE LOCAL REALITY CAN UNDERLIE THE LOCAL QUANTUM FACTS." If he added "ASSUMING THAT THEY ARE INDEED FACTS, WHICH THEY SEEM TO BE", then it would be fine.
 
  • #62
DrChinese said:
I am not aware of any controversy with regards to the predictions of QM in any particular setup. Every experimental paper (at least those I have seen) carefully compares the QM predictions to actual results, usually in the form of a graph and an accompanying table. These are peer-reviewed.
I was thinking of, for example, Bell's idea about experiments that he seems to have thought are possible according to QM but have until now not been possible in reality; and, for example, Weihs's experiment, which yields results for which the exact QM predictions are unclear for large time windows. Maybe we should start a discussion topic about that?
 
  • #63
lugita15 said:
Let me try again. This is the crucial step where counterfactual definiteness is invoked: "Starting with two completely identical binary messages, if A's 30 degree turn introduces a 25% mismatch and B's 30 degree turn introduces a 25% mismatch." He is very clear: A's 30 degree turn introduces a 25% mismatch from the 0 degree binary message. He is saying this even in the case when B has also turned his detector, so that no one is actually measuring this particular bit of the 0 degree binary message. The only bits of the 0 degree binary message that are actually observed are the ones for which one of the detectors was turned to 0 degrees. And yet he is asserting that even when neither of the detectors is pointed at 0 degrees, the mismatches between the two detectors still represent errors from the 0 degree binary message. Isn't discussion of deviation from unmeasured bits of a binary message a clear case of counterfactual definiteness, AKA realism?
Sorry, I never understood such discussions and words - which is why I prefer Herbert's formulation, and even Bell's. And I already stated how I interpret that: we assume that the rotation of the detector doesn't affect the stream of whatever is coming towards the detector. If people call that "counterfactual definiteness", that's fine by me. It's certainly what I call "local realism" aka "no spooky action at a distance".
[..]The only point I'd concede is that he might want to qualify his remarks in all caps that "NO CONCEIVABLE LOCAL REALITY CAN UNDERLIE THE LOCAL QUANTUM FACTS." If he added "ASSUMING THAT THEY ARE INDEED FACTS, WHICH THEY SEEM TO BE", then it would be fine.
Sure - my point was that he presented non-facts as facts, and I fell into that trap.
 
  • #64
harrylin said:
Please elaborate - you seem to suggest that you have spotted another flaw in Herbert's proof, but it's not clear to me what you mean.
Not a flaw in Herbert's proof. But in his interpretation of the physical meaning of his proof.
 
  • #65
gill1109 said:
Nobel laureate Gerard 't Hooft (whom I have talked to about this a number of times) is a superdeterminist when we are talking about the quantum world and what might be below or behind it at even smaller scales; what he apparently fails to realize is that Bell's argument applies to objects in the macroscopic world, or supposed macroscopic world - actual detector clicks, and the clicks which the detectors would have made if they had been aligned differently.
Of course. And that's all that Bell's theorem applies to. As billschnieder has said repeatedly.

What Bell's theorem doesn't apply to, as far as anybody can ascertain, is whatever is happening in the reality underlying instrumental behavior. So it doesn't tell us whether nature is local or nonlocal in that underlying reality.

't Hooft is a superdeterminist? Interesting. I would have thought him to have a better approach to the interpretation of Bell's theorem than that.
 
  • #66
ThomasT said:
Not a flaw in Herbert's proof. But in his interpretation of the physical meaning of his proof.
His proof is a proof (or so he claims) about the physical meaning of observations.
 
  • #67
harrylin said:
OK thanks for the clarification - that looks very different! :-p

So one then has in reality for example:
- step 1: 90% mismatch
- step 2: 92.5% mismatch
- step 3: 92.5% mismatch
- step 4: 97.5% mismatch.

Based on local reality and applying Herbert's approach, I find that in the case of a 90% mismatch at step 1, the mismatch at step 4 should be <= 185%. Of course, that means <= 100% - in other words, the bound becomes vacuous and cannot be violated at all.
Note: in a subsequent thread on why hidden variables imply a linear relationship, it became clear that imperfect detection is not the only flaw in Herbert's claim about facts of measurement reality; another issue that Herbert missed is the effect of data picking such as with time coincidence windows.
- https://www.physicsforums.com/showthread.php?t=589923&page=6
 
  • #68
harrylin said:
Note: in a subsequent thread on why hidden variables imply a linear relationship, it became clear that imperfect detection is not the only flaw in Herbert's claim about facts of measurement reality; another issue that Herbert missed is the effect of data picking such as with time coincidence windows.
- https://www.physicsforums.com/showthread.php?t=589923&page=6
There are, of course, numerous experimental loopholes in current Bell tests. Herbert isn't concerned with loopholes. The point is that local determinism is fundamentally in contradiction with the empirical predictions of QM. Whether this empirical disagreement is practically testable given current experimental limitations is, to me, beside the point.
 
  • #69
lugita15 said:
There are, of course, numerous experimental loopholes in current Bell tests. Herbert isn't concerned with loopholes. The point is that local determinism is fundamentally in contradiction with the empirical predictions of QM. Whether this empirical disagreement is practically testable given current experimental limitations is, to me, beside the point.
Obviously we continue to disagree about what Herbert claimed to have proved; but everyone can read Herbert's claims and we have sufficiently discussed that.
 
  • #70
harrylin said:
Obviously we continue to disagree about what Herbert claimed to have proved; but everyone can read Herbert's claims and we have sufficiently discussed that.
I completely agree with you that Herbert worded his conclusion a bit too strongly, because he took for granted that QM is correct in its experimental predictions, an assumption that has overwhelming evidence backing it up, but not definitive proof due to various loopholes like detector efficiency.
 
  • #71
lugita15 said:
I completely agree with you that Herbert worded his conclusion a bit too strongly, because he took for granted that QM is correct in its experimental predictions, an assumption that has overwhelming evidence backing it up, but not definitive proof due to various loopholes like detector efficiency.
The more I become aware of the tricky details and the impressive experimental attempts to disprove "local realism", the more I am impressed by - to borrow some of your phrasing - the equally overwhelming survival of Einstein locality, but not definitive proof* due to various loopholes like detector efficiency and "noise". :wink:

*and of course, in science such a thing as "definitive proof" is anyway hardly possible!
 
  • #72
harrylin said:
The more I become aware of the tricky details and the impressive experimental attempts to disprove "local realism", the more I am impressed by - to borrow some of your phrasing - the equally overwhelming survival of Einstein locality, but not definitive proof due to various loopholes like detector efficiency and "noise". :wink:
But the thing is that pretty much every one of the loopholes has been closed separately; we just haven't managed to close all of them simultaneously in one experiment. So the local determinist is left having to come up with an explanation of the form "In THIS kind of experiment, the predictions of QM only appear to be right because of THIS loophole, but in THAT kind of experiment, the predictions of QM only appear to be right because of THAT loophole."
 
  • #73
lugita15 said:
But the thing is that pretty much every one of the loopholes has been closed separately; we just haven't managed to close all of them simultaneously in one experiment. So the local determinist is left having to come up with an explanation of the form "In THIS kind of experiment, the predictions of QM only appear to be right because of THIS loophole, but in THAT kind of experiment, the predictions of QM only appear to be right because of THAT loophole."
You seem to confuse "local realism" with "local determinism", but that's another topic. What we are concerned with is realistic measurements, and contrary to what I think you suggest, I have not seen evidence of the necessity for more ad hoc "local" explanations of measurement results than for "non-local" explanations. And that's again a different topic than the one discussed here, so I'll leave it at that.
 
  • #74
harrylin said:
You seem to confuse "local realism" with "local determinism", but that's another topic.
Yes, sorry about that, I was using ThomasT's terminology. I meant what is normally called local realism.
What we are concerned with is realistic measurements, and contrary to what I think you suggest, I have not seen evidence of the necessity for more ad hoc "local" explanations of measurement results than for "non-local" explanations. And that's again a different topic than the one discussed here, so I'll leave it at that.
To be specific, there are ion experiments that close the detection loophole but leave the communication loophole open, and there are photon experiments that close the communication loophole but leave the detection loophole open. So you have to say something like "Photons seem to obey QM when slower-than-light communication is ruled out only because the photon detectors are inefficient, but ions seem to obey QM even with perfectly efficient detection only because slower-than-light communication occurs." Doesn't that seem ad hoc to you?
 
  • #75
lugita15 said:
Yes, sorry about that, I was using ThomasT's terminology. I meant what is normally called local realism.
To be specific, there are ion experiments that close the detection loophole but leave the communication loophole open, and there are photon experiments that close the communication loophole but leave the detection loophole open. So you have to say something like "Photons seem to obey QM when slower-than-light communication is ruled out only because the photon detectors are inefficient, but ions seem to obey QM even with perfectly efficient detection only because slower-than-light communication occurs." Doesn't that seem ad hoc to you?
As I said it's a different topic than Nick Herbert's proof. Please start that topic with a new thread - thanks in advance!
 
  • #76
lugita15 said:
To be specific, there are ion experiments that close the detection loophole but leave the communication loophole open, and there are photon experiments that close the communication loophole but leave the detection loophole open. So you have to say something like "Photons seem to obey QM when slower-than-light communication is ruled out only because the photon detectors are inefficient, but ions seem to obey QM even with perfectly efficient detection only because slower-than-light communication occurs." Doesn't that seem ad hoc to you?
How (why) do (must) the ion experiments leave the communication loophole open? I don't know anything about these experiments, so I'm just asking. Why can't they close the communication loophole in the ion experiments ... seeing as how that loophole has been closed in other experiments?
 
  • #77
ThomasT said:
How (why) do (must) the ion experiments leave the communication loophole open? I don't know anything about these experiments, so I'm just asking. Why can't they close the communication loophole in the ion experiments ... seeing as how that loophole has been closed in other experiments?
There's no fundamental reason why it hasn't been closed, it's just that we have more experience doing Bell tests with photons, so people have developed good techniques for switching photon detector settings fast enough that the communication loophole is closed. If I were to guess, I would say that it's more likely that the detection loophole is closed for photon experiments sooner than the communication loophole is closed for ion experiments, if only because more people are going to work on improving photon detectors.
 
  • #78
harrylin said:
Note: in a subsequent thread on why hidden variables imply a linear relationship, it became clear that imperfect detection is not the only flaw in Herbert's claim about facts of measurement reality; another issue that Herbert missed is the effect of data picking such as with time coincidence windows.
- https://www.physicsforums.com/showthread.php?t=589923&page=6
The coincidence time loophole is just the same thing: imperfect matching of pairs. Say, if instead of talking about the "detection loophole" you talked about violation of the "fair sampling assumption", then it would cover the coincidence time loophole just as well.

On the practical side, for this coincidence time loophole you would predict some relevant detections outside the coincidence time window. And that can be tested with the Weihs et al data.
I got one dataset from the Weihs experiment (some time ago one was publicly available), loaded it into a MySQL database and then fooled around with different queries for quite some time. And I found that, first, as you increase the coincidence time window (beyond a certain value of a few ns) the correlations diminish at the level that you would expect from random detections, and second, detection times do not correlate beyond some small time interval. Deviations in that small time interval are explained as detector jitter.
Have to say that this detector jitter seemed rather "unfair" with respect to different polarization settings (and that might be what De Raedt saw in the data). But I could not do any further tests without more datasets.
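For anyone who wants to try this kind of analysis, the core of it is just coincidence matching over two timestamp lists. A minimal sketch (my own, assuming sorted numpy arrays of detection times in ns - not zonde's actual queries):

```python
import numpy as np

def coincidences(t_a, t_b, window_ns):
    """Count events in sorted timestamp arrays t_a, t_b (in ns, with
    len(t_b) >= 2) that fall within +/- window_ns of each other, pairing
    each A event with its nearest B event. A toy version of Weihs-style
    coincidence matching."""
    idx = np.searchsorted(t_b, t_a)          # candidate neighbours in t_b
    idx = np.clip(idx, 1, len(t_b) - 1)
    left, right = t_b[idx - 1], t_b[idx]
    nearest = np.where(np.abs(right - t_a) < np.abs(left - t_a), right, left)
    return int(np.sum(np.abs(nearest - t_a) <= window_ns))
```

Scanning window_ns and recomputing the correlations for each window then reproduces the kind of sweep described above.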
 
  • #79
zonde said:
The coincidence time loophole is just the same thing: imperfect matching of pairs. Say, if instead of talking about the "detection loophole" you talked about violation of the "fair sampling assumption", then it would cover the coincidence time loophole just as well. On the practical side, for this coincidence time loophole you would predict some relevant detections outside the coincidence time window. And that can be tested with the Weihs et al data.
I understand "detection loophole" to mean incomplete detection due to detector inefficiencies, and I find it very instructive to distinguish that from data picking. Those are very different things - indeed, Weihs's data illustrate the importance of that distinction rather well; it was even what I had in mind.
I got one dataset from the Weihs experiment (some time ago one was publicly available), loaded it into a MySQL database and then fooled around with different queries for quite some time.
Could you make that dataset available to physicsforums? I think that it's very instructive for this group to have access to real data instead of fantasy data such as presented by Herbert.
And I found that, first, as you increase the coincidence time window (beyond a certain value of a few ns) the correlations diminish at the level that you would expect from random detections, and second, detection times do not correlate beyond some small time interval. Deviations in that small time interval are explained as detector jitter. Have to say that this detector jitter seemed rather "unfair" with respect to different polarization settings [..]
That's the third explanation that I see (earlier ones that I saw were "noise" and "non-entangled pairs"); I guess it's really high time to start a topic about ad hoc explanations!
 
  • #80
Nick Herbert's so-called "proof" is a nice introductory pop-sci illustration of Bell's inequality. It is certainly not a proof of anything, let alone a replacement for Bell's theorem. It does not have a clear statement of either assumptions or conclusions. And the key sentence of the "proof" does not stand up to scrutiny:
Starting with two completely identical binary messages, if A's 30 degree turn introduces a 25% mismatch and B's 30 degree turn introduces a 25% mismatch, then the total mismatch (when both are turned) can be at most 50%.
Now, where do these two sequences come from? And why would the difference be 25%? 25% is only an average; in theory it is possible (however unlikely) to get any value between 0% and 100%. And how can we possibly compare sequences at different angle settings when they keep changing every time? Frankly, I'm surprised billschnieder didn't rip it apart.

The answer of course is that one should not assume the same sequence when measuring different angle settings. Instead one must consider all possible sequences according to their probability distribution. Welcome λ, ρ(λ) and the rest of Bell's proof. As a result, the inequality only holds statistically: the probability of violating it goes to zero as the sequence length goes to ∞, but never vanishes completely.
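That statistical character is easy to demonstrate with a toy model (my own sketch, not Delta Kilo's formulation) in which each error rate is estimated on its own batch of pairs, as in a real experiment:

```python
import random

def mismatch_rate(n, p):
    """Fraction of n pairs whose bits differ, when each pair mismatches
    independently with probability p (a trivially local model)."""
    return sum(random.random() < p for _ in range(n)) / n

def violated(n, p=0.25):
    """Estimate R(-30,0), R(0,30) and R(-30,30) on three separate batches.
    The local model has R_ab = R_bc = p and R_ac = 2p(1-p) <= 2p, so the
    inequality holds on average - but finite samples fluctuate."""
    r_ab = mismatch_rate(n, p)
    r_bc = mismatch_rate(n, p)
    r_ac = mismatch_rate(n, 2 * p * (1 - p))
    return r_ac > r_ab + r_bc

for n in (10, 100, 1000):
    freq = sum(violated(n) for _ in range(5000)) / 5000
    print(n, freq)  # chance "violations" die out as n grows
```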
 
  • #81
Delta Kilo said:
Nick Herbert's so-called "proof" is a nice introductory pop-sci illustration of Bell's inequality. It is certainly not a proof of anything, let alone a replacement for Bell's theorem. It does not have a clear statement of either assumptions or conclusions.
I agree that Herbert's argument is worded rather informally, but I think his reasoning is fundamentally sound. In my blog post here, I try to restate his logic a bit more precisely, but only a bit, because I think it's mostly fine as it is.
Delta Kilo said:
Now, where do these two sequences come from? And why would the difference be 25%? 25% is only an average; in theory it is possible (however unlikely) to get any value between 0% and 100%. And how can we possibly compare sequences at different angle settings when they keep changing every time? Frankly, I'm surprised billschnieder didn't rip it apart.
A 25% mismatch is just an informal way of saying that the probability of mismatch between corresponding bits of the two measured sequences is 25%. If that's not clear in Herbert's exposition, I hope I made that clear in my blog post.
Delta Kilo said:
The answer of course is that one should not assume the same sequence when measuring different angle settings.
Well, the notion that mismatches between the two sequences really represent deviations from a common 0-degree binary "message" is deduced from the fact that you get identical behavior at identical polarizer settings, and thus, assuming counterfactual definiteness (and excluding superdeterminism), we can conclude that even when we don't turn the polarizers to the same angle, it is still true that we WOULD have gotten identical behavior if we HAD turned the polarizers to identical angles. And if you believe in locality, the only way this is possible is for the two photons in each photon pair to have agreed in advance exactly what angles to both go through and what angles not to go through. If, for a particular photon pair, they have agreed that they should go through at 0 degrees, that is represented as a 1; otherwise, it would be a 0. This is the 0-degree binary message, and mismatches when you turn the polarizer settings away from 0 degrees are supposed to indicate deviations from this.
Delta Kilo said:
As a result, the inequality only holds statistically: the probability of violating it goes to zero as the sequence length goes to ∞, but never vanishes completely.
Of course Herbert's proof is about probabilities. The Bell inequality in his proof can be stated as: the probability of mismatch at 60 degrees is less than or equal to twice the probability of mismatch at 30 degrees. Now of course, like any probability, you can't find the probability with perfect accuracy using only finitely many runs of the experiment. It's just like when you flip a fair coin a billion times, you're not guaranteed to get exactly half heads and half tails. You just have to try and extrapolate the observed probability in the limit as n goes to infinity. This is nothing special about Bell in particular.
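For the record, the quantum rates behind these numbers (standard QM for polarization-entangled photons, not Herbert's own wording) are Malus-law mismatch probabilities:

P(mismatch at relative angle θ) = sin²θ, so sin²30° = 0.25 and sin²60° = 0.75,

and 0.75 > 2 × 0.25 = 0.5 is precisely the violation of the bound.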
 
  • #82
Delta Kilo said:
[..]
The answer of course is that one should not assume the same sequence when measuring different angle settings. Instead one must consider all possible sequences according to their probability distribution. Welcome λ, ρ(λ) and the rest of Bell's proof. As a result, the inequality only holds statistically: the probability of violating it goes to zero as the sequence length goes to ∞, but never vanishes completely.
He doesn't assume the same sequence and neither does he need to use unknown variables or probability distributions of them - good riddance! - that's the part that I appreciate.
Instead he makes claims (overly simplified, though) about the observed statistical correlations and what a non-spooky model could predict for such correlations. He doesn't even need probability analysis, only the most basic understanding of statistical behaviour.
 
  • #83
harrylin said:
He doesn't assume the same sequence and neither does he need to use unknown variables or probability distributions of them - good riddance! - that's the part that I appreciate.
I guess we have a difference of opinion on this point. But for the benefit of others, here was my response to you earlier in this thread:
lugita15 said:
Let me try again. This is the crucial step where counterfactual definiteness is invoked: "Starting with two completely identical binary messages, if A's 30 degree turn introduces a 25% mismatch and B's 30 degree turn introduces a 25% mismatch." He is very clear: A's 30 degree turn introduces a 25% mismatch from the 0 degree binary message. He is saying this even in the case when B has also turned his detector, so that no one is actually measuring this particular bit of the 0 degree binary message. The only bits of the 0 degree binary message that are actually observed are the ones for which one of the detectors was turned to 0 degrees. And yet he is asserting that even when neither of the detectors is pointed at 0 degrees, the mismatches between the two detectors still represent errors from the 0 degree binary message. Isn't discussion of deviation from unmeasured bits of a binary message a clear case of counterfactual definiteness, AKA realism?
 
  • #84
lugita15 said:
Of course Herbert's proof is about probabilities.
If it is, he fails to mention that. Words such as 'probability', 'distribution', 'statistics', 'expectation' are conspicuously absent from his text.

What I'm saying is, we know what he says is basically right, because it is backed by the machinery of Bell's mathematical proof. Without it, if taken literally, it is full of holes you can ride an elephant through. As such, it is susceptible to the factorization argument, you know the kind usually pushed by Bill here, to which the original Bell's proof is immune.

And since Herbert's 'proof' has neither assumptions nor conclusions, people can argue about the meaning of it until the Second Coming of the Great Prophet Zarquon.
 
  • #85
lugita15 said:
I guess we have a difference of opinion on this point. But for the benefit of others, here was my response to you earlier in this thread:
[..] "Starting with two completely identical binary messages, if A's 30 degree turn introduces a 25% mismatch and B's 30 degree turn introduces a 25% mismatch."
Although we surely disagreed about what Herbert did not write, I think that we fully agreed about what Herbert wrote - and this is also what I referred to in my reply to Delta Kilo. Perhaps DK understood "completely identical" to mean a fixed code that can be repeated, but obviously that isn't what is measured nor what Herbert meant.
 
  • #86
Delta Kilo said:
If it is, he fails to mention that. Words such as 'probability', 'distribution', 'statistics', 'expectation' are conspicuously absent from his text.

What I'm saying is, we know what he says is basically right, because it is backed by the machinery of Bell's mathematical proof. Without it, if taken literally, it is full of holes you can ride an elephant through. As such, it is susceptible to the factorization argument, you know the kind usually pushed by Bill here, to which the original Bell's proof is immune.

And since Herbert's 'proof' has neither assumptions nor conclusions, people can argue about the meaning of it until the Second Coming of the Great Prophet Zarquon.
Rather than waiting for Zarquon, why don't you look at the restatement of Herbert's proof in my blog post here, and tell me what flaws or gaps you see in that?
 
  • #87
Delta Kilo said:
If it is, he fails to mention that. Words such as 'probability', 'distribution', 'statistics', 'expectation' are conspicuously absent from his text.
Words like "distribution" and "statistics" are replaced in Herbert's text by applied statistical observation terms such as "seemingly random 50/50 sequence of zeros and ones", "Match between the two sequences", etc.
[..] if taken literally, it is full of holes you can ride an elephant through. As such, it is susceptible to the factorization argument, you know the kind usually pushed by Bill here [..]
I'm afraid that, apart from a misrepresentation of real data from optics experiments (thanks again Zonde - and also thanks gill1109, sorry that I couldn't follow your explanation!), I don't know of any of those "holes", although I asked for them. Please elaborate!
And since Herbert's 'proof' has neither assumptions nor conclusions [..]
Sure it has, as discussed earlier in this thread.
 
  • #88
harrylin said:
Although we surely disagreed about what Herbert did not write, I think that we fully agreed about what Herbert wrote - and this is also what I referred to in my reply to Delta Kilo. Perhaps DK understood "completely identical" to mean a fixed code that can be repeated, but obviously that isn't what is measured nor what Herbert meant.
I assumed Delta Kilo was just referring to the bits of the 0 degree binary message, which are shared by the two particles (that is, the Nth bit of the 0 degree binary message is shared by the 2 photons in the Nth particle pair).
 
  • #89
harrylin said:
Words like "distribution" and "statistics" are replaced in Herbert's text by applied statistical observation terms such as "seemingly random 50/50 sequence of zeros and ones", "Match between the two sequences", etc.
I'll leave it to linguists to figure out.

harrylin said:
I don't know of any of those "holes", although I asked for them. Please elaborate!
One hole is the direct reference in the 'proof' to the number of mismatches for different angles for the same coded sequence, which leaves it open to the argument that this is not what is measured in the actual experiment. And, unlike with Bell, in this particular case the argument is valid.

Another hole is the failure to mention the statistical character of the 'proof'. He does not say how long the sequence should be for the 'proof' to be valid. Clearly it does not work with a sequence of length 1, and it has a fair chance of failing with sequences of a smallish number of bits.

harrylin said:
Sure it has, as discussed earlier in this thread.
As I'm sure it will be discussed, again and again. In the absence of a rigorous formulation, every word of it is subject to personal interpretation (and there are a lot of words); there is just no way to make convincing arguments about it. A QM proof without any math in it. Great, just great.

Delta Kilo Over and Out.
 
  • #90
zonde said:
Deviations in that small time interval are explained as detector jitter.
Have to say that this detector jitter seemed rather "unfair" with respect to different polarization settings (and that might be what De Raedt saw in the data). But I could not do any further tests without more datasets.
I wrote this from memory (I made the analysis ~3 years ago) and have to correct that this jitter seems much more likely to come from the electronics instead of the detector. And calling it "unfair" might not be very correct, as it can have a local explanation.

harrylin said:
I understand "detection loophole" to mean incomplete detection due to detector inefficiencies, and I find it very instructive to distinguish that from data picking. Those are very different things - indeed, Weihs's data illustrate the importance of that distinction rather well; it was even what I had in mind.

Could you make that dataset available to physicsforums? I think that it's very instructive for this group to have access to real data instead of fantasy data such as presented by Herbert.
I think you are missing the positive side of Herbert's fantasy data. Real experiments have a lot of different imperfections and it is really good to have some simple baseline that can help you sort the important things from the other things.

But you can PM me and I will send you the dataset.

harrylin said:
That's the third explanation that I see (earlier ones that I saw were "noise" and "non-entangled pairs"); I guess it's really high time to start a topic about ad hoc explanations!
Take a look at this paper:
A Close Look at the EPR Data of Weihs et al
It basically does an analysis analogous to the one I made.

And there is another one from the same author:
Explaining Counts from EPRB Experiments: Are They Consistent with Quantum Theory?

If you are interested in comparing that analysis with mine, I have an Excel file left over from my analysis: see attachment
 


  • #91
Delta Kilo said:
[..] One hole is the direct reference in the 'proof' to the number of mismatches for different angles for the same coded sequence, which leaves it open to the argument that this is not what is measured in the actual experiment.
Except that, as we explained several times, Herbert of course does not imply the same coded sequence. But if you didn't mean that literally, perhaps you referred to the same issue as the imperfect detection mentioned by gill1109 and zonde - and that's not just a hole but a pertinent error in Herbert's presentation.
Another hole is the failure to mention the statistical character of the 'proof'. He does not say how long the sequence should be for the 'proof' to be valid. Clearly it does not work with a sequence of length 1, and it has a fair chance of failing with sequences of a smallish number of bits. As I'm sure it will be discussed, again and again.
I would not call that a hole, as it is implicit in the argumentation; how could anyone who is educated not understand that, for example, a "50/50 random result" refers to a statistical process? Thus I doubt that it will be discussed in this thread - at least, I would not spend more than a single reply if such a groundless objection were raised.
In the absence of a rigorous formulation, every word of it is subject to personal interpretation (and there are a lot of words); there is just no way to make convincing arguments about it. A QM proof without any math in it. Great, just great.
Delta Kilo Over and Out.
No problem, Over and Out!

Just for completeness: the math is certainly there, implicit in the words; it's inherent to physics that it is concerned with more than just math (for example Newton's Principia and Faraday's theory are mostly words and illustrations, with little pure math).
 
  • #92
zonde said:
[...]I think you are missing the positive side of Herbert's fantasy data. [..]
That may be true; I think that his fantasy data are very good, as long as it is stressed that they are not a good reflection of real data processing.
If you are interested in comparing that analysis with mine, I have an Excel file left over from my analysis: see attachment
I would like to, but of course not in a thread on Herbert's proof! Thus, I will now start a thread on that topic.
 
  • #93
lugita15 said:
But the thing is that pretty much every one of the loopholes has been closed separately; we just haven't managed to close all of them simultaneously in one experiment. So the local determinist is left having to come up with an explanation of the form "In THIS kind of experiment, the predictions of QM only appear to be right because of THIS loophole, but in THAT kind of experiment, the predictions of QM only appear to be right because of THAT loophole."
Predictions of QM are just math without a physical model. But the same math can apply to very different physical situations, so the argument that the physical situation should be the same because the math is the same does not hold water IMHO.

And I think there are arguments why a photon should be viewed as radically different from an ion. Matter particles are the type that do "communicate" among themselves, as they can form persistent structures. Photons, on the other hand, are agents of "communication" rather than nodes in a structure.

And then there is a more philosophical, handwaving type of justification for a local realistic explanation. But that would be interesting only if you want to understand the local realistic position rather than test its strength.

lugita15 said:
There's no fundamental reason why it hasn't been closed, it's just that we have more experience doing Bell tests with photons, so people have developed good techniques for switching photon detector settings fast enough that the communication loophole is closed. If I were to guess, I would say that it's more likely that the detection loophole is closed for photon experiments sooner than the communication loophole is closed for ion experiments, if only because more people are going to work on improving photon detectors.
If you decide to start a new topic about loophole-free experiments, I can propose some interesting papers for discussion:
On ion side:
Bell inequality violation with two remote atomic qubits
An Elementary Quantum Network of Single Atoms in Optical Cavities
On photon side:
Conclusive quantum steering with superconducting transition edge sensors
 
  • #94
harrylin said:
I still don't get it... Herbert's proof doesn't even consider particles, let alone both particles or the same photon pairs.

Here is how I apply Herbert's proof to the scenario of incomplete detection, following his logic to the letter and adding my comments: ...
Step Two: Tilt the A detector till errors reach 25%. This occurs at a mutual misalignment of 30 degrees.

Let us combine this with the other assumptions about how Herbert's SPOT detector works. According to Herbert's description of his SPOT detector, detector 1 fires 0% of the time when tilted at 90°, 50% of the time when tilted at 45° and 100% of the time when tilted at 0°. Had he stopped there, it would appear to be linear. However, Herbert goes on to say that detector 1 fires 25% of the time when tilted at 30°. Clearly the functioning of the SPOT detector cannot be linear with respect to angle. His own description of the functioning of the detector cannot be explained by a linear function.

More later.
 
  • #95
billschnieder said:
Let us combine this with the other assumptions about how Herbert's SPOT detector works. According to Herbert's description of his SPOT detector, detector 1 fires 0% of the time when tilted at 90°, 50% of the time when tilted at 45° and 100% of the time when tilted at 0°. Had he stopped there, it would appear to be linear. However, Herbert goes on to say that detector 1 fires 25% of the time when tilted at 30°. Clearly the functioning of the SPOT detector cannot be linear with respect to angle. His own description of the functioning of the detector cannot be explained by a linear function.

More later.
At first sight, that issue doesn't matter for Herbert's proof. I copy back my overview here, with a little modification based on the later discussion. It seems to me that the bold part is valid no matter whether the relationship is linear or not:

----------------------------------------------------------------------------
Step One: Start by aligning both SPOT detectors. No errors are observed.

[Note that, as we next discussed, this is perhaps the main flaw of Herbert's proof, as it implies 100% detection and zero mismatches. But it is interesting to verify "what if":]

[harrylin: for example the sequences go like this:

A 10010110100111010010
B 10010110100111010010]

Step Two: Tilt the A detector till errors reach 25%. This occurs at a mutual misalignment of 30 degrees.

[harrylin: for example (a bit idealized) the sequences go like this:

A 10010100110110110110
B 10110100111010010010

This mismatch could be partly due to the detection of different photon pairs.]

Step Three: Return A detector to its original position (100% match). Now tilt the B detector in the opposite direction till errors reach 25%. This occurs at a mutual misalignment of -30 degrees.

[harrylin: for example the sequences go like this, for the same reasons:

A 10100100101011010011
B 10010101101011010101]

Step Four: Return B detector to its original position (100% match). Now tilt detector A by +30 degrees and detector B by -30 degrees so that the combined angle between them is 60 degrees.

What is now the expected mismatch between the two binary code sequences?

[..] Assuming a local reality means that, for each A photon, whatever hidden mechanism determines the output of Miss A's SPOT detector, the operation of that mechanism cannot depend on the setting of Mr B's distant detector. In other words, in a local world, any changes that occur in Miss A's coded message when she rotates her SPOT detector are caused by her actions alone.
[STRIKE][harrylin: apparently that includes whatever mechanism one could imagine - also non-detection of part of the photons][/STRIKE]
And the same goes for Mr B. [..] So with this restriction in place (the assumption that reality is local), let's calculate the expected mismatch at 60 degrees.

Starting with two completely identical binary messages, if A's 30 degree turn introduces a 25% mismatch and B's 30 degree turn introduces a 25% mismatch, then the total mismatch (when both are turned) can be at most 50%.
In fact the mismatch should be less than 50% because if the two errors happen to occur on the same photon, a mismatch is converted to a match.

[harrylin: and if the errors happen to occur on different photons that are compared, still sometimes a mismatch will be converted to a match. Thus now for example the sequences go like this, for the same reasons as +30 degrees and -30 degrees:

A 10101010110101010011
B 10100100101011010101]
----------------------------------------------------------------------------

It looks to me like the only thing one has to assume is that there is no conspiracy of the photons based on how the detectors are relatively oriented - and even that is prevented by design in some tests. If you disagree, please detail how two 25% mismatches can, under the suggested ideal conditions, result in more than 50% total mismatch.
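As a quick sanity check of that challenge, the ideal conditions can be simulated directly (my own sketch, not part of Herbert's text): flip each side's bits independently with probability 25% and see what total mismatch results.

```python
import random

def total_mismatch(n=100_000, p=0.25):
    """Start from perfectly matching bit pairs; flip A's bit with prob. p
    and, independently, flip B's bit with prob. p (locality: each flip
    depends only on the local detector turn). Return the mismatch rate."""
    mismatches = 0
    for _ in range(n):
        flip_a = random.random() < p
        flip_b = random.random() < p
        mismatches += flip_a != flip_b  # both flipped = match restored
    return mismatches / n

print(total_mismatch())  # ~0.375 = 2*0.25*0.75, comfortably below 50%
```

Independent local errors give 2 × 0.25 × 0.75 = 37.5%, and no local assignment of flips, however correlated, can push the total above 25% + 25% = 50%.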

Also, you commented elsewhere:
billschnieder said:
[..] - The second issue which I have discussed [..] is that the inequality is derived for possibilities which can never be simultaneously realized (actualized). In principle it is impossible to test experimentally, so trying to take experimental results on the basis that the probabilities are the same doesn't make sense. The probabilities may be the same, but not simultaneously.
I think that that relates to the same reasonable-looking assumption of non-conspiracy - we assume that the moon shines even when we don't look, because it shines whenever we look. Do you claim that the statistics on one side can be affected by what is done on the other side? That appears very "non-local" to me!
 
  • #96
billschnieder said:
Let us combine this with the other assumptions about how Herbert's SPOT detector works. According to Herbert's description of his SPOT detector, detector 1 fires 0% of the time when tilted at 90°, 50% of the time when tilted at 45° and 100% of the time when tilted at 0°. Had he stopped there, it would appear to be linear. However, Herbert goes on to say that detector 1 fires 25% of the time when tilted at 30°. Clearly the functioning of the SPOT detector cannot be linear with respect to angle. His own description of the functioning of the detector cannot be explained by a linear function.
First of all, it should be called sublinearity rather than linearity, because the form of the Bell inequality is something plus something is AT MOST something, not something plus something equals something. Second of all, the sublinearity is not an assumption; it is the conclusion of a careful argument. So you can't say that the sublinearity is contrary to experimental results and that therefore the argument is invalid. The argument is, after all, a proof by contradiction. It assumes that local causality underlies quantum mechanical phenomena, uses this assumption to arrive at the conclusion that the mismatches must be sublinear, and then notes that this sublinearity runs contrary to the experimental predictions of QM.
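Incidentally, all four of the quoted figures fit a single standard curve (my observation, reading "fires 25% of the time at 30°" as a 25% mismatch rate): a match rate of cos²θ, i.e. a mismatch rate of sin²θ, gives cos²0° = 100%, cos²45° = 50%, cos²90° = 0% and sin²30° = 25%. So the SPOT statistics are indeed non-linear in the angle - they are sinusoidal, which is exactly the quantum prediction that the sublinearity bound collides with.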
 
  • #97
I'm now looking into simulations that try to imitate the outcomes of experiments such as the one described here, and I found that some seemingly unimportant differences in experimental settings can be of great importance.
Does anyone know of an experiment that exactly reproduced the set-up of Herbert's proof?
That protocol uses 0°, +30° and -30° in a special way that is essential for the proof.

As a reminder:
Step One: Start by aligning both SPOT detectors. No errors are observed.
Step Two: Tilt the A detector till errors reach 25%. This occurs at a mutual misalignment of 30 degrees.
Step Three: Return A detector to its original position (100% match). Now tilt the B detector in the opposite direction till errors reach 25%. This occurs at a mutual misalignment of -30 degrees.
Step Four: Return B detector to its original position (100% match). Now tilt detector A by +30 degrees and detector B by -30 degrees so that the combined angle between them is 60 degrees.
 
  • #98
harrylin said:
That protocol uses 0°, +30° and -30° in a special way that is essential for the proof.
No, those particular angles aren't essential for the proof at all. We can take any angles a, b, and c. Let R(θ1,θ2) be the error rate when the polarizers are oriented at θ1 and θ2. Then Herbert's proof shows that R(a,c) ≤ R(a,b) + R(b,c).

Choosing equally spaced angles just makes things simple.
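To see that concretely, one can scan angle triples against the quantum error rate; a sketch assuming the standard R_QM(θ1,θ2) = sin²(θ1−θ2) for polarization-entangled photons:

```python
import itertools
import math

def r_qm(t1, t2):
    """QM mismatch rate for polarizers at angles t1, t2 (degrees)."""
    return math.sin(math.radians(t1 - t2)) ** 2

# Collect every triple a < b < c on a 5-degree grid that violates
# Herbert's bound R(a,c) <= R(a,b) + R(b,c).
violations = [(a, b, c)
              for a, b, c in itertools.combinations(range(0, 91, 5), 3)
              if r_qm(a, c) > r_qm(a, b) + r_qm(b, c)]
print(len(violations), violations[:5])
```

Plenty of unequally spaced triples violate the bound, which supports the point that -30/0/+30 is just the tidiest choice.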
 
  • #99
harrylin said:
I'm now looking into simulations that try to imitate the outcomes of experiments such as the one described here
Are these simulations that resort to loopholes?
 
  • #100
lugita15 said:
No, those particular angles aren't essential for the proof at all. We can take any angles a, b, and c. Let R(θ1,θ2) be the error rate when the polarizers are oriented at θ1 and θ2. Then Herbert's proof shows that R(a,c) ≤ R(a,b) + R(b,c).

Choosing equally spaced angles just makes things simple.
I discovered that, more than elsewhere, the devil is in the details. Indeed, it doesn't have to be +30 degrees and -30 degrees; I think that +22 and -30 degrees is just as tough for "local reality", and his argument is not affected by that. However, many experiments used protocols that don't match Herbert's proof. Any one?
lugita15 said:
Are these simulations that resort to loopholes?
Any simulation that manages to reproduce real observations will do so by employing some means to that end - and I would not know which means would not be called "loopholes" by someone. I'm interested in verifying such simulations against experiments that have actually been performed; but regretfully, many experiments have built-in loopholes by design. Herbert's design of experiment contains perhaps the fewest pre-baked loopholes, and that makes it challenging. Thus, once more: has his experiment actually been done, as he suggests?

PS: please don't present excuses for why such an experiment has (maybe) not been performed; only answer if you can fulfill my request and provide such matching data.
 