Nick Herbert's Proof: Quantum Non-Locality Explained

  • Thread starter: harrylin
  • Tags: Proof

Summary
Nick Herbert's proof of quantum non-locality suggests that assuming local reality leads to a prediction of less than 50% code mismatch at 60 degrees, which appears convincing. The discussion raises questions about potential issues with this proof and the existence of models that could reproduce quantum mechanics' characteristics with greater mismatch. It highlights that many published models exploit the detection loophole, which may not align with Herbert's conclusions. The conversation also emphasizes the distinction between stochastic independence and genuine physical independence in the context of locality, questioning the assumptions underlying Bell-type proofs. Overall, the debate continues on the implications of non-locality and the realism requirement in quantum mechanics.
  • #61
harrylin said:
There you go again! And again I must reply: no, he refers to the sequence that he claims that you obtain each time when you orient the detectors 0 degrees. That is not about a conditional, hypothetical experience of a non-observed photon, but a factual experience of observed events. I found that really nice.
Let me try again. This is the crucial step where counterfactual definiteness is invoked: "Starting with two completely identical binary messages, if A's 30 degree turn introduces a 25% mismatch and B's 30 degree turn introduces a 25% mismatch." He is very clear: A's 30 degree turn introduces a 25% mismatch from the 0 degree binary message. He is saying this even in the case when B has also turned his detector, so that no one is actually measuring this particular bit of the 0 degree binary message. The only bits of the 0 degree binary message that are actually observed are the ones for which one of the detectors was turned to 0 degrees. And yet he is asserting that even when neither of the detectors is pointed at 0 degrees, the mismatches between the two detectors still represent errors from the 0 degree binary message. Isn't discussion of deviation from unmeasured bits of a binary message a clear case of counterfactual definiteness, AKA realism?
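Spelled out, the counting behind that 50% figure looks like this (a sketch; the event labels E_A and E_B are mine, not Herbert's):

```latex
% Sketch of the counting step, assuming every mismatch is an "error"
% relative to the (counterfactual) 0-degree binary message.
% E_A = "A's bit differs from the 0-degree bit", E_B = the same for B.
\begin{align*}
P(\text{A and B disagree})
  &= P(E_A \,\triangle\, E_B) && \text{(they disagree iff exactly one side errs)}\\
  &\le P(E_A \cup E_B)\\
  &\le P(E_A) + P(E_B) = 25\% + 25\% = 50\%.
\end{align*}
```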
What mattered to me was that Herbert made a seemingly rock solid claim about Nature and possible models of Nature that has been falsified
Perhaps Herbert should have phrased his claim slightly less boldly, because various practical loopholes make it hard to perfectly do the experiment he is talking about. But while it is true that experimental limitations prevent us at the current moment from absolutely definitively ruling out all local hidden variable models, we're getting there quickly, as I think zonde has said.
- and I was frustrated because I did not find the error. Zonde was so kind to point the error out to me.
Herbert is not making any "errors". The main point of the proof, even if Herbert didn't state it quite like this, is to show that unless quantum mechanics is wrong about the experimental predictions it makes concerning entanglement, we can deem local hidden variable models to be ruled out.
From reading up on this topic I discovered that there is some fuzziness about what exactly QM predicts for some real measurements;
No, there isn't.
but the models that I heard about accurately reproduce what is measured in a typical Herbert set-up.
First of all, the term "Herbert set-up" is a bit cringe-inducing; as Herbert himself says, "It has appeared in some textbooks as "Herbert's Proof" where I would have preferred "Herbert's Version of Bell's Proof"". (And as I told you before, although Herbert apparently came up with it independently, the -30, 0, 30 example was the one used by Bell when he tried to explain his proof to popular audiences.)

But anyway, you're right that there are local hidden variable models that are not unequivocally ruled out by currently practical Bell tests. But that probably says more about current experimental limitations than it does about the success of those models.
Following your logic, we should conclude that QM is wrong.
No, we shouldn't. If a perfect, loophole-free Bell test, like the one Herbert envisions, gave results consistent with the possibility of a local hidden variable model, then yes there may be just cause to abandon QM. But until that time, how can you conclude such a thing from the logic?
Well Herbert fooled me there - and apparently he was fooled himself. :rolleyes:
No, I don't think so. The only point I'd concede is that he might want to qualify his remarks in all caps that "NO CONCEIVABLE LOCAL REALITY CAN UNDERLIE THE LOCAL QUANTUM FACTS." If he added "ASSUMING THAT THEY ARE INDEED FACTS, WHICH THEY SEEM TO BE", then it would be fine.
 
  • #62
DrChinese said:
I am not aware of any controversy with regards to the predictions of QM in any particular setup. Every experimental paper (at least those I have seen) carefully compares the QM predictions to actual results, usually in the form of a graph and an accompanying table. These are peer-reviewed.
I was thinking, for example, of experiments that Bell seems to have thought are possible according to QM but that until now have not been possible in reality; and of Weihs' experiment, which yields results for which the exact QM predictions are unclear at large time windows. Maybe we should start a discussion topic about that?
 
  • #63
lugita15 said:
Let me try again. This is the crucial step where counterfactual definiteness is invoked: "Starting with two completely identical binary messages, if A's 30 degree turn introduces a 25% mismatch and B's 30 degree turn introduces a 25% mismatch." He is very clear: A's 30 degree turn introduces a 25% mismatch from the 0 degree binary message. He is saying this even in the case when B has also turned his detector, so that no one is actually measuring this particular bit of the 0 degree binary message. The only bits of the 0 degree binary message that are actually observed are the ones for which one of the detectors was turned to 0 degrees. And yet he is asserting that even when neither of the detectors is pointed at 0 degrees, the mismatches between the two detectors still represent errors from the 0 degree binary message. Isn't discussion of deviation from unmeasured bits of a binary message a clear case of counterfactual definiteness, AKA realism?
Sorry, I never understood such discussions and words - which is why I prefer Herbert's formulation, and even Bell's. And I already stated how I interpret that: we assume that the rotation of the detector doesn't affect the stream of whatever is coming towards the detector. If people call that "counterfactual definiteness", that's fine with me. It's certainly what I call "local realism" aka "no spooky action at a distance".
[..]The only point I'd concede is that he might want to qualify his remarks in all caps that "NO CONCEIVABLE LOCAL REALITY CAN UNDERLIE THE LOCAL QUANTUM FACTS." If he added "ASSUMING THAT THEY ARE INDEED FACTS, WHICH THEY SEEM TO BE", then it would be fine.
Sure - my point was that he presented non-facts as facts, and I fell into that trap.
 
  • #64
harrylin said:
Please elaborate - you seem to suggest to have spotted another flaw in Herbert's proof, but it's not clear to me what you mean.
Not a flaw in Herbert's proof. But in his interpretation of the physical meaning of his proof.
 
  • #65
gill1109 said:
Nobel laureate Gerard 't Hooft (who I have talked to about this a number of times) is a superdeterminist when we are talking about the quantum world and what might be below or behind it at even smaller scales; what he apparently can't realize is that Bell's argument applies to objects in the macroscopic world, or supposed macroscopic world - actual detector clicks and the clicks which the detectors would have made if they had been aligned differently.
Of course. And that's all that Bell's theorem applies to. As billschneider has said repeatedly.

What Bell's theorem doesn't apply to, as far as anybody can ascertain, is whatever is happening in the reality underlying instrumental behavior. So, it doesn't inform wrt whether nature is local or nonlocal in that underlying reality.

't Hooft is a superdeterminist? Interesting. I would have thought him to have a better approach to the interpretation of Bell's theorem than that.
 
  • #66
ThomasT said:
Not a flaw in Herbert's proof. But in his interpretation of the physical meaning of his proof.
His proof is a proof (or so he claims) about the physical meaning of observations.
 
  • #67
harrylin said:
OK thanks for the clarification - that looks very different! :-p

So one then has in reality for example:
- step 1: 90% mismatch
- step 2: 92.5% mismatch
- step 3: 92.5% mismatch
- step 4: 97.5% mismatch.

Based on local reality and applying Herbert's approach, I find that in case of a mismatch of 90% at step 1, the mismatch of step 4 should be <= 185%. Of course, that means <=100%.
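A minimal numeric check of that bound (a sketch only; the step numbers are the ones listed above):

```python
# Sketch: Herbert-style bound applied to the (imperfect) numbers listed above.
# Step 1: both detectors at the same setting; steps 2 and 3: one detector
# turned; step 4: both detectors turned.
mismatch = {1: 0.90, 2: 0.925, 3: 0.925, 4: 0.975}  # observed fractions

# In Herbert's spirit, local reality bounds the step-4 mismatch by the sum of
# the step-2 and step-3 mismatches -- here 1.85, i.e. trivially capped at 100%.
bound = min(mismatch[2] + mismatch[3], 1.0)
print(f"bound on step 4: {bound:.3f}, observed: {mismatch[4]:.3f}")
print("violated" if mismatch[4] > bound else "satisfied")
```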
Note: in a subsequent thread on why hidden variables imply a linear relationship, it became clear that imperfect detection is not the only flaw in Herbert's claim about facts of measurement reality; another issue that Herbert missed is the effect of data picking such as with time coincidence windows.
- https://www.physicsforums.com/showthread.php?t=589923&page=6
 
  • #68
harrylin said:
Note: in a subsequent thread on why hidden variables imply a linear relationship, it became clear that imperfect detection is not the only flaw in Herbert's claim about facts of measurement reality; another issue that Herbert missed is the effect of data picking such as with time coincidence windows.
- https://www.physicsforums.com/showthread.php?t=589923&page=6
There are, of course, numerous experimental loopholes in current Bell tests. Herbert isn't concerned with loopholes. The point is that local determinism is fundamentally in contradiction with the empirical predictions of QM. Whether this empirical disagreement is practically testable given current experimental limitations is, to me, beside the point.
 
  • #69
lugita15 said:
There are, of course, numerous experimental loopholes in current Bell tests. Herbert isn't concerned with loopholes. The point is that local determinism is fundamentally in contradiction with the empirical predictions of QM. Whether this empirical disagreement is practically testable given current experimental limitations is, to me, beside the point.
Obviously we continue to disagree about what Herbert claimed to have proved; but everyone can read Herbert's claims and we have sufficiently discussed that.
 
  • #70
harrylin said:
Obviously we continue to disagree about what Herbert claimed to have proved; but everyone can read Herbert's claims and we have sufficiently discussed that.
I completely agree with you that Herbert worded his conclusion a bit too strongly, because he took for granted that QM is correct in its experimental predictions, an assumption that has overwhelming evidence backing it up, but not definitive proof due to various loopholes like detector efficiency.
 
  • #71
lugita15 said:
I completely agree with you that Herbert worded his conclusion a bit too strongly, because he took for granted that QM is correct in its experimental predictions, an assumption that has overwhelming evidence backing it up, but not definitive proof due to various loopholes like detector efficiency.
The more I become aware of the tricky details and the impressive experimental attempts to disprove "local realism", the more I am impressed by - to borrow some of your phrasing - the equally overwhelming survival of Einstein locality, but not definitive proof* due to various loopholes like detector efficiency and "noise". :wink:

*and of course, in science such a thing as "definitive proof" is anyway hardly possible!
 
  • #72
harrylin said:
The more I become aware of the tricky details and the impressive experimental attempts to disprove "local realism", the more I am impressed by - to borrow some of your phrasing - the equally overwhelming survival of Einstein locality, but not definitive proof due to various loopholes like detector efficiency and "noise". :wink:
But the thing is that pretty much every one of the loopholes has been closed separately; we just haven't managed to close all of them simultaneously in one experiment. So the local determinist is left with having to come up with an explanation of the form "In THIS kind of experiment, the predictions of QM only appear to be right because of THIS loophole, but in THAT kind of experiment, the predictions of QM only appear to be right because of THAT loophole."
 
  • #73
lugita15 said:
But the thing is that pretty much every one of the loopholes has been closed separately; we just haven't managed to close all of them simultaneously in one experiment. So the local determinist is left with having to come up with an explanation of the form "In THIS kind of experiment, the predictions of QM only appear to be right because of THIS loophole, but in THAT kind of experiment, the predictions of QM only appear to be right because of THAT loophole."
You seem to confuse "local realism" with "local determinism", but that's another topic. What we are concerned with is realistic measurements, and contrary to what I think you suggest, I have not seen evidence of the necessity for more ad hoc "local" explanations of measurement results than for "non-local" explanations. And that's again a different topic than the one that is discussed here, so I'll leave it at that.
 
  • #74
harrylin said:
You seem to confuse "local realism" with "local determinism", but that's another topic.
Yes, sorry about that, I was using ThomasT's terminology. I meant what is normally called local realism.
What we are concerned with is realistic measurements, and contrary to what I think you suggest, I have not seen evidence of the necessity for more ad hoc "local" explanations of measurement results than for "non-local" explanations. And that's again a different topic than the one that is discussed here, so I'll leave it at that.
To be specific, there are ion experiments that close the detection loophole but leave the communication loophole open, and there are photon experiments that close the communication loophole but leave the detection loophole open. So you have to say something like "Photons seem to obey QM when slower-than-light communication is ruled out only because the photon detectors are inefficient, but ions seem to obey QM even with perfectly efficient detection only because slower-than-light communication occurs." Doesn't that seem ad hoc to you?
 
  • #75
lugita15 said:
Yes, sorry about that, I was using ThomasT's terminology. I meant what is normally called local realism.
To be specific, there are ion experiments that close the detection loophole but leave the communication loophole open, and there are photon experiments that close the communication loophole but leave the detection loophole open. So you have to say something like "Photons seem to obey QM when slower-than-light communication is ruled out only because the photon detectors are inefficient, but ions seem to obey QM even with perfectly efficient detection only because slower-than-light communication occurs." Doesn't that seem ad hoc to you?
As I said it's a different topic than Nick Herbert's proof. Please start that topic with a new thread - thanks in advance!
 
  • #76
lugita15 said:
To be specific, there are ion experiments that close the detection loophole but leave the communication loophole open, and there are photon experiments that close the communication loophole but leave the detection loophole open. So you have to say something like "Photons seem to obey QM when slower-than-light communication is ruled out only because the photon detectors are inefficient, but ions seem to obey QM even with perfectly efficient detection only because slower-than-light communication occurs." Doesn't that seem ad hoc to you?
How (why) do (must) the ion experiments leave the communication loophole open? I don't know anything about these experiments, so I'm just asking. Why can't they close the communication loophole in the ion experiments ... seeing as how that loophole has been closed in other experiments?
 
  • #77
ThomasT said:
How (why) do (must) the ion experiments leave the communication loophole open? I don't know anything about these experiments, so I'm just asking. Why can't they close the communication loophole in the ion experiments ... seeing as how that loophole has been closed in other experiments?
There's no fundamental reason why it hasn't been closed; it's just that we have more experience doing Bell tests with photons, so people have developed good techniques for switching photon detector settings fast enough that the communication loophole is closed. If I were to guess, I would say that it's more likely that the detection loophole will be closed for photon experiments before the communication loophole is closed for ion experiments, if only because more people are going to work on improving photon detectors.
 
  • #78
harrylin said:
Note: in a subsequent thread on why hidden variables imply a linear relationship, it became clear that imperfect detection is not the only flaw in Herbert's claim about facts of measurement reality; another issue that Herbert missed is the effect of data picking such as with time coincidence windows.
- https://www.physicsforums.com/showthread.php?t=589923&page=6
The coincidence time loophole is likewise about imperfect matching of pairs. If, instead of talking about the "detection loophole", you talked about violation of the "fair sampling assumption", then it would cover the coincidence time loophole just as well.

On the practical side, for this coincidence time loophole you would predict some relevant detections outside the coincidence time window. And that can be tested with the Weihs et al data.
I got one dataset from the Weihs experiment (some time ago there was one publicly available), loaded it into a MySQL database and then fooled around with different queries for quite some time. And I found that, first, as you increase the coincidence time window (beyond a certain value of a few ns) correlations diminish at the level that you would expect from random detections, and second, detection times do not correlate beyond some small time interval. Deviations in that small time interval are explained as detector jitter.
Have to say that this detector jitter seemed rather "unfair" with respect to different polarization settings (and that might be what De Raedt saw in the data). But I could not do any further tests without more datasets.
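A rough Python analogue of that window scan (a sketch only; the data layout - sorted timestamp, setting and outcome arrays for each side - is an assumption here, not the actual format of the Weihs files):

```python
import numpy as np

def match_fraction(t_a, out_a, t_b, out_b, window_ns):
    """Pair each Alice event with the nearest-in-time Bob event; of the pairs
    that fall inside the coincidence window, return the fraction whose
    outcomes match, plus the number of such pairs. Timestamps are assumed
    to be in nanoseconds and sorted. For a per-setting analysis one would
    additionally filter on the setting arrays."""
    idx = np.searchsorted(t_b, t_a)
    idx = np.clip(idx, 1, len(t_b) - 1)
    nearest = np.where(np.abs(t_b[idx] - t_a) < np.abs(t_b[idx - 1] - t_a),
                       idx, idx - 1)
    in_window = np.abs(t_b[nearest] - t_a) <= window_ns
    if not in_window.any():
        return np.nan, 0
    frac = np.mean(out_a[in_window] == out_b[nearest][in_window])
    return frac, int(in_window.sum())

# Scan the window size as described above: correlations should sit well above
# chance for a few-ns window and drift toward chance level as it widens.
# t_alice, out_alice, t_bob, out_bob would come from the parsed dataset.
# for w in (1, 2, 5, 10, 50, 200):
#     print(w, match_fraction(t_alice, out_alice, t_bob, out_bob, w))
```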
 
  • #79
zonde said:
The coincidence time loophole is likewise about imperfect matching of pairs. If, instead of talking about the "detection loophole", you talked about violation of the "fair sampling assumption", then it would cover the coincidence time loophole just as well. On the practical side, for this coincidence time loophole you would predict some relevant detections outside the coincidence time window. And that can be tested with the Weihs et al data.
I understand "detection loophole" to mean incomplete detection due to detector inefficiencies, and I find it very instructive to distinguish that from data picking. Those are very different things - indeed, Weih's data illustrate the importance of that distinction rather well, it was even what I had in mind.
I got one dataset from the Weihs experiment (some time ago there was one publicly available), loaded it into a MySQL database and then fooled around with different queries for quite some time.
Could you make that dataset available to physicsforums? I think that it's very instructive for this group to have access to real data instead of fantasy data such as presented by Herbert.
And I found that, first, as you increase the coincidence time window (beyond a certain value of a few ns) correlations diminish at the level that you would expect from random detections, and second, detection times do not correlate beyond some small time interval. Deviations in that small time interval are explained as detector jitter. Have to say that this detector jitter seemed rather "unfair" with respect to different polarization settings [..]
That's the third explanation that I see (earlier ones that I saw were "noise" and "non-entangled pairs"); I guess it's really high time to start a topic about ad hoc explanations!
 
  • #80
Nick Herbert's so-called "proof" is a nice introductory pop-sci illustration of Bell's inequality. It is certainly not a proof of anything, let alone a replacement for Bell's theorem. It does not have a clear statement of either assumptions or conclusions. And the key sentence of the "proof" does not stand up to scrutiny:
Starting with two completely identical binary messages, if A's 30 degree turn introduces a 25% mismatch and B's 30 degree turn introduces a 25% mismatch, then the total mismatch (when both are turned) can be at most 50%.
Now, where do these two sequences come from? And why would the difference be 25%? 25% is only an average; in theory it is possible (however unlikely) to get any value between 0% and 100%. And how can we possibly compare sequences at different angle settings when they keep changing every time? Frankly, I'm surprised billschnieder didn't rip it apart.

The answer of course is that one should not assume the same sequence when measuring different angle settings. Instead one must consider all possible sequences according to their probability distribution. Welcome λ, ρ(λ) and the rest of Bell's proof. As a result, the inequality only holds statistically, that is the probability of violating it goes towards zero with sequence length going to ∞, but never vanishes completely.
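Spelled out, the statistical version runs roughly as follows (a sketch in Bell-style notation rather than Herbert's, assuming a deterministic local model with perfect correlation at equal settings):

```latex
% Outcome A(a,\lambda) = \pm 1 at setting a; perfect correlation at equal
% settings means B's outcome at setting b is A(b,\lambda) as well.
% Writing [P] for the indicator of P, the mismatch probability is
\[
  M(a,b) = \int \rho(\lambda)\,\bigl[A(a,\lambda) \neq A(b,\lambda)\bigr]\, d\lambda .
\]
% Pointwise in \lambda, a disagreement between settings a and c requires a
% disagreement on at least one of the legs (a,b) or (b,c):
\[
  \bigl[A(a,\lambda) \neq A(c,\lambda)\bigr] \le
  \bigl[A(a,\lambda) \neq A(b,\lambda)\bigr] + \bigl[A(b,\lambda) \neq A(c,\lambda)\bigr].
\]
% Integrating against \rho(\lambda) turns this into a statement about
% probabilities rather than about one fixed sequence:
\[
  M(a,c) \le M(a,b) + M(b,c),
\]
% which for (a,b,c) = (-30^\circ, 0^\circ, 30^\circ) is Herbert's "at most 50%".
```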
 
  • #81
Delta Kilo said:
Nick Herbert's so-called "proof" is a nice introductory pop-sci illustration of Bell's inequality. It is certainly not a proof of anything, let alone a replacement for Bell's theorem. It does not have a clear statement of either assumptions or conclusions.
I agree that Herbert's argument is worded rather informally, but I think his reasoning is fundamentally sound. In my blog post here, I try to restate his logic a bit more precisely, but only a bit, because I think it's mostly fine as it is.
Delta Kilo said:
Now, where do these two sequences come from? And why would the difference be 25%? 25% is only an average; in theory it is possible (however unlikely) to get any value between 0% and 100%. And how can we possibly compare sequences at different angle settings when they keep changing every time? Frankly, I'm surprised billschnieder didn't rip it apart.
A 25% mismatch is just an informal way of saying that the probability of mismatch between corresponding bits of the two measured sequences is 25%. If that's not clear in Herbert's exposition, I hope I made that clear in my blog post.
Delta Kilo said:
The answer of course is that one should not assume the same sequence when measuring different angle settings.
Well, the notion that mismatches between the two sequences really represent deviation from a common 0-degree binary "message" is deduced from the fact that you get identical behavior at identical polarizer settings, and thus assuming counterfactual definiteness (and excluding superdeterminism) we can conclude that even when we don't turn the polarizers to the same angle, it is still true that we WOULD have gotten identical behavior if we HAD turned the polarizers to identical angles. And if you believe in locality, the only way this is possible is for the two photons in each photon pair to have agreed in advance exactly what angles to both go through and what angles not to go through. If, for a particular photon pair, they have agreed that they should go through at 0 degrees, that is represented as a 1; otherwise, it would be a 0. This is the 0-degree binary message, and mismatches when you turn the polarizer settings away from 0 degrees are supposed to indicate deviations from this.
Delta Kilo said:
As a result, the inequality only holds statistically, that is the probability of violating it goes towards zero with sequence length going to ∞, but never vanishes completely.
Of course Herbert's proof is about probabilities. The Bell inequality in his proof can be stated as: the probability of mismatch at 60 degrees is less than or equal to twice the probability of mismatch at 30 degrees. Now of course, like any probability, you can't find the probability with perfect accuracy using only finitely many runs of the experiment. It's just like when you flip a fair coin a billion times, you're not guaranteed to get exactly half heads and half tails. You just have to try to extrapolate the observed frequency in the limit as n goes to infinity. This is nothing special about Bell in particular.
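To make that concrete, here is a small simulation sketch (an illustration, not Herbert's own text): any deterministic local model of the kind described above, where each pair carries a prearranged answer for every angle, keeps the 60 degree mismatch at or below the sum of the two 30 degree mismatches, while the quantum prediction (sin² of the relative angle) exceeds it.

```python
import numpy as np

rng = np.random.default_rng(0)
angles = (-30, 0, 30)            # the three settings in Herbert's example
n_pairs = 200_000

# Local hidden variable: each pair carries a prearranged bit for every angle
# it might be asked about; the 0-degree column is the "0-degree binary message".
# The assignment below is arbitrary -- any assignment whatsoever obeys the bound.
instructions = rng.integers(0, 2, size=(n_pairs, len(angles)))

def mismatch(i, j):
    """Fraction of pairs whose prearranged answers differ at settings i and j."""
    return np.mean(instructions[:, i] != instructions[:, j])

m_a = mismatch(0, 1)   # A at -30 deg, B at 0 deg
m_b = mismatch(1, 2)   # A at 0 deg,  B at 30 deg
m_c = mismatch(0, 2)   # A at -30 deg, B at 30 deg (60 deg apart)

print(f"local model: m(-30,0) + m(0,30) = {m_a + m_b:.3f} >= m(-30,30) = {m_c:.3f}")

# Quantum prediction for polarization-entangled pairs: mismatch = sin^2(angle).
qm_30 = np.sin(np.radians(30)) ** 2   # 0.25
qm_60 = np.sin(np.radians(60)) ** 2   # 0.75
print(f"quantum:     2 * {qm_30:.2f} = {2 * qm_30:.2f} <  {qm_60:.2f}  (violation)")
```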
 
  • #82
Delta Kilo said:
[..]
The answer of course is that one should not assume the same sequence when measuring different angle settings. Instead one must consider all possible sequences according to their probability distribution. Welcome λ, ρ(λ) and the rest of Bell's proof. As a result, the inequality only holds statistically, that is the probability of violating it goes towards zero with sequence length going to ∞, but never vanishes completely.
He doesn't assume the same sequence and neither does he need to use unknown variables or probability distributions of them - good riddance! - that's the part that I appreciate.
Instead he makes claims (though overly simplified) about the observed statistical correlations and what a non-spooky model could predict for such correlations. He doesn't even need probability analysis, only the most basic understanding of statistical behaviour.
 
  • #83
harrylin said:
He doesn't assume the same sequence and neither does he need to use unknown variables or probability distributions of them - good riddance! - that's the part that I appreciate.
I guess we have a difference of opinion on this point. But for the benefit of others, here was my response to you earlier this thread:
lugita15 said:
Let me try again. This is the crucial step where counterfactual definiteness is invoked: "Starting with two completely identical binary messages, if A's 30 degree turn introduces a 25% mismatch and B's 30 degree turn introduces a 25% mismatch." He is very clear: A's 30 degree turn introduces a 25% mismatch from the 0 degree binary message. He is saying this even in the case when B has also turned his detector, so that no one is actually measuring this particular bit of the 0 degree binary message. The only bits of the 0 degree binary message that are actually observed are the ones for which one of the detectors was turned to 0 degrees. And yet he is asserting that even when neither of the detectors is pointed at 0 degrees, the mismatches between the two detectors still represent errors from the 0 degree binary message. Isn't discussion of deviation from unmeasured bits of a binary message a clear case of counterfactual definiteness, AKA realism?
 
  • #84
lugita15 said:
Of course Herbert's proof is about probabilities.
If it is, he fails to mention that. Words such as 'probability', 'distribution', 'statistics', 'expectation' are conspicuously absent from his text.

What I'm saying, we know what he says is basically right, because it is backed by the machinery of Bell's mathematical proof. Without it, if taken literally, it is full of holes you can ride an elephant through. As such, it is susceptible to the factorization argument, you know the kind usually pushed by Bill here, to which the original Bell's proof is immune.

And since Herbert's 'proof' has neither assumptions nor conclusions, people can argue about the meaning of it until the Second Coming of the Great Prophet Zarquon
 
  • #85
lugita15 said:
I guess we have a difference of opinion on this point. But for the benefit of others, here was my response to you earlier this thread:
[..] "Starting with two completely identical binary messages, if A's 30 degree turn introduces a 25% mismatch and B's 30 degree turn introduces a 25% mismatch."
Although we surely disagreed about what Herbert did not write, I think that we fully agreed about what Herbert wrote - and this is also what I referred to in my reply to Delta Kilo. Perhaps DK understood "completely identical" to mean a fixed code that can be repeated, but obviously that isn't what is measured nor what Herbert meant.
 
  • #86
Delta Kilo said:
If it is, he fails to mention that. Words such as 'probability', 'distribution', 'statistics', 'expectation' are conspicuously absent from his text.

What I'm saying, we know what he says is basically right, because it is backed by the machinery of Bell's mathematical proof. Without it, if taken literally, it is full of holes you can ride an elephant through. As such, it is susceptible to the factorization argument, you know the kind usually pushed by Bill here, to which the original Bell's proof is immune.

And since Herbert's 'proof' has neither assumptions nor conclusions, people can argue about the meaning of it until the Second Coming of the Great Prophet Zarquon
Rather than waiting for Zarquon, why don't you look at the restatement of Herbert's proof in my blog post here, and tell me what flaws or gaps you see in that?
 
  • #87
Delta Kilo said:
If it is, he fails to mention that. Words such as 'probability', 'distribution', 'statistics', 'expectation' are conspicuously absent from his text.
Words like "distribution" and "statistics" are replaced in Herbert's text by the applied statistic observation terms such as "seemingly random 50/50 sequence of zeros and ones", "Match between the two sequences", etc.
[..] if taken literally, it is full of holes you can ride an elephant through. As such, it is susceptible to the factorization argument, you know the kind usually pushed by Bill here [..]
I'm afraid that apart from a misrepresentation of real data from optics experiments (thanks again Zonde - and also thanks gill, sorry that I couldn't follow your explanation!), I don't know any of those "holes" although I asked for them. Please elaborate!
And since Herbert's 'proof' has neither assumptions nor conclusions [..]
Sure it has, as discussed earlier in this thread.
 
  • #88
harrylin said:
Although we surely disagreed about what Herbert did not write, I think that we fully agreed about what Herbert wrote - and this is also what I referred to in my reply to Delta Kilo. Perhaps DK understood "completely identical" to mean a fixed code that can be repeated, but obviously that isn't what is measured nor what Herbert meant.
I assumed Delta Kilo was just referring to the bits of the 0 degree binary message, which are shared by the two particles (that is, the Nth bit of the 0 degree binary message is shared by the 2 photons in the Nth particle pair).
 
  • #89
harrylin said:
Words like "distribution" and "statistics" are replaced in Herbert's text by the applied statistic observation terms such as "seemingly random 50/50 sequence of zeros and ones", "Match between the two sequences", etc.
I'll leave it to linguists to figure out.

harrylin said:
I don't know any of those "holes" although I asked for them. Please elaborate!
One hole is the direct reference in the 'proof' to the number of mismatches for different angles for the same coded sequence, which leaves it open to the argument that this is not what is measured in the actual experiment. And unlike with Bell's proof, in this particular case the argument is valid.

Another hole is the failure to mention the statistical character of the 'proof'. He does not say how long the sequence should be for the 'proof' to be valid. Clearly it does not work with a sequence of length 1, and it has a fair chance of failing with sequences of a smallish number of bits.
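To put a rough number on that, here is a sketch under an assumed local model in which a turned detector misreads the shared 0-degree bit with probability 25%, independently on each side:

```python
import numpy as np

rng = np.random.default_rng(1)

def apparent_violation(n_bits):
    """One simulated experiment of n_bits per setting. A pair mismatches iff
    exactly one side flips its reading of the shared 0-degree bit."""
    def mismatch(p_flip_a, p_flip_b):
        flips_a = rng.random(n_bits) < p_flip_a
        flips_b = rng.random(n_bits) < p_flip_b
        return np.mean(flips_a != flips_b)
    m_a = mismatch(0.25, 0.0)     # only A turned
    m_b = mismatch(0.0, 0.25)     # only B turned
    m_c = mismatch(0.25, 0.25)    # both turned
    return m_c > m_a + m_b        # does the finite sample *appear* to violate?

for n in (1, 10, 100, 1000):
    rate = np.mean([apparent_violation(n) for _ in range(5000)])
    print(f"sequence length {n:>4}: apparent violation in {rate:.1%} of runs")
```

For this model the true mismatch rates are 25%, 25% and 37.5%, so the inequality holds for the probabilities; the apparent violations are finite-sample fluctuations and die out as the sequences get long.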

harrylin said:
Sure it has, as discussed earlier in this thread.
As I'm sure it will be discussed, again and again. In the absence of a rigorous formulation, every word of it is subject to personal interpretation (and there are a lot of words); there is just no way to make convincing arguments about it. A QM proof without any math in it. Great, just great.

Delta Kilo Over and Out.
 
  • #90
zonde said:
Deviations in that small time interval are explained as detector jitter.
Have to say that this detector jitter seemed rather "unfair" with respect to different polarization settings (and that might be what De Raedt saw in the data). But I could not do any further tests without more datasets.
I wrote this from memory (I made the analysis ~3 years ago) and have to correct it: the jitter seems much more likely to come from the electronics rather than from the detector. And calling it "unfair" might not be very correct, as it can have a local explanation.

harrylin said:
I understand "detection loophole" to mean incomplete detection due to detector inefficiencies, and I find it very instructive to distinguish that from data picking. Those are very different things - indeed, Weih's data illustrate the importance of that distinction rather well, it was even what I had in mind.

Could you make that dataset available to physicsforums? I think that it's very instructive for this group to have access to real data instead of fantasy data such as presented by Herbert.
I think you are missing the positive side of Herbert's fantasy data. Real experiments have a lot of different imperfections, and it is really good to have some simple baseline that can help you sort the important things from the unimportant ones.

But you can PM me and I will send you the dataset.

harrylin said:
That's the third explanation that I see (earlier ones that I saw were "noise" and "non-entangled pairs"); I guess it's really high time to start a topic about ad hoc explanations!
Take a look at this paper:
A Close Look at the EPR Data of Weihs et al
It basically does an analysis analogous to the one I made.

And there is another one from the same author:
Explaining Counts from EPRB Experiments: Are They Consistent with Quantum Theory?

If you are interested in comparing that analysis with mine, I have an Excel file left over from my analysis: see the attachment.
 
