Is action at a distance possible, as envisaged by the EPR Paradox?

In summary, John Bell was not a big fan of QM. He thought it was premature, and that the theory didn't yet meet the standard of predictability set by Einstein.
  • #946
nismaratwork said:
I can see why Dirac disdained this kind of pondering, which in the end has little or nothing to do with the work of physics and its applications in life.
There's a real point here. If the motivation in defining a realistic mechanism is simply to soothe a preexisting philosophical disposition, then such debates have nothing to do with anything. However, some big game remains in physics, perhaps even the biggest available. If further constraints can be established, or constraints that have been overly generalized can be better defined, it might turn out to be of value.

As DevilsAvocado put it, little "green EPR men" are not a very satisfactory theoretical construct. Realists want to avoid them with realistic constructs, with varying judgment on what constitutes realistic. Non-realists avoid them by denying the realism of the premise. In the end, the final product needs only a formal description with the greatest possible predictive value, independent of our philosophical sensibilities.
 
  • #947
JesseM said:
I'm glad you're still asking questions, but if you don't really understand the proof, and you do know it's been accepted as valid for years by mainstream physicists, doesn't it make sense to be a little more cautious about making negative claims about it like this one from an earlier post?

ThomasT said:
I couldn't care less if nonlocality or ftl exist or not. In fact, it would be very exciting if they did. But the evidence just doesn't support that conclusion.
I understand the proofs of BIs. What I don't understand is why nonlocality or ftl are seriously considered in connection with BI violations, and treated by some as synonymous with quantum entanglement.

The evidence supports Bell's conclusion that the form of Bell's (2) is incompatible with qm and experimental results. But that's not evidence, and certainly not proof, that nature is nonlocal or ftl. (I think that most mainstream scientists would agree that the assumption of nonlocality or ftl is currently unwarranted.) I think that a more reasonable hypothesis is that Bell's (2) is an incorrect model of the experimental situation.

Which you seem to agree with:
JesseM said:
It (the form of Bell's 2) shows how the joint probability can be separated into the product of two independent probabilities if you condition on the hidden variables λ. So, P(AB|abλ) = P(A|aλ)*P(B|bλ) can be understood as an expression of the locality condition. But he obviously ends up proving that this doesn't work as a way of modeling entanglement...it's really only modeling a case where A and B are perfectly correlated (or perfectly anticorrelated, depending on the experiment) whenever a and b are the same, under the assumption that there is a local explanation for this perfect correlation (like the particles being assigned the same hidden variables by the source that created them).

Why doesn't the incompatibility of Bell's (2) with qm and experimental results imply nonlocality or ftl? As DA put it simply, and as you (and I) agree:
DevilsAvocado said:
Bell's (2) is not about entanglement; Bell's (2) is only about the hidden variable λ.
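To make concrete just how restrictive that factorized form is, here is a minimal sketch (a toy CHSH calculation of my own, not Bell's original (2), assuming the simplest case of deterministic hidden variables; any stochastic local model is just a P(λ)-weighted mixture of these):

```python
from itertools import product
from math import sqrt

# Each lambda deterministically fixes Alice's outcomes (A1, A2) for her two
# settings and Bob's (B1, B2) for his, so the correlation E(a,b) factorizes
# exactly as in P(AB|ab,lambda) = P(A|a,lambda) * P(B|b,lambda).
best = 0
for A1, A2, B1, B2 in product([+1, -1], repeat=4):
    S = A1 * B1 + A1 * B2 + A2 * B1 - A2 * B2
    best = max(best, abs(S))

print(best)         # 2 -> the CHSH bound for every such local model
print(2 * sqrt(2))  # ~2.828 -> the quantum prediction at optimal angles
```

Averaging over λ with any weights P(λ) can't exceed the best single λ, so the bound is pure arithmetic over the shared hidden variable; that is exactly what the experiments violate.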
 
  • #948
ThomasT said:
I understand the proofs of BIs. What I don't understand is why nonlocality or ftl are seriously considered in connection with BI violations, and treated by some as synonymous with quantum entanglement.
Yes, you don't understand it, but mainstream physicists are in agreement that Bell's inequalities all follow directly from local realism plus a few minimal assumptions (like no parallel universes, no 'conspiracies' in past conditions that predetermine what choice the experimenter will make on each trial and tailor the earlier hidden variables to those future choices), so why not consider the possibility that the problem lies with your understanding rather than with that of all those physicists over the decades?
ThomasT said:
The evidence supports Bell's conclusion that the form of Bell's (2) is incompatible with qm and experimental results.
And (2) would necessarily be true in all local realist theories that satisfy those few minimal assumptions. (2) is not in itself a separate assumption; it follows logically from the postulate of local realism.
ThomasT said:
But that's not evidence, and certainly not proof, that nature is nonlocal or ftl. (I think that most mainstream scientists would agree that the assumption of nonlocality or ftl is currently unwarranted.)
They would agree that it's warranted to rule out local realist theories. Do you disagree with that? Of course this doesn't force you to believe in ftl, you are free to just drop the idea of an objective universe that has a well-defined state even when we're not measuring it (which is basically the option taken by those who prefer the Copenhagen interpretation), or consider the possibility that each measurement splits the experimenter into multiple copies who see different results (many-worlds interpretation), or consider the possibility of some type of backwards causality that can create the kind of "conspiracies" I mentioned.
ThomasT said:
I think that a more reasonable hypothesis is that Bell's (2) is an incorrect model of the experimental situation.
Local realism is an incorrect model, but (2) is not a separate assumption from local realism; it would be true in any local realist theory.
ThomasT said:
Why doesn't the incompatibility of Bell's (2) with qm and experimental results imply nonlocality or ftl?
It implies the falsity of local realism, which means if you are a realist who believes in an objective universe independent of our measurements, and you don't believe in any of the "weird" options like parallel worlds or "conspiracies", your only remaining option is nonlocality/ftl.
 
Last edited:
  • #949
DrChinese said:
As far as I can see, there are currently very high detection efficiencies. From Zeilinger et al:

These can be characterized individually by measured visibilities, which were: for the source, ≈ 99% (98%) in the H/V (45°/135°) basis; for both Alice’s and Bob’s polarization analyzers, ≈ 99%; for the fibre channel and Alice’s analyzer (measured before each run), ≈ 97%, while the free-space link did not observably reduce Bob’s polarization visibility; for the effect of accidental coincidences resulting from an inherently low signal-to-noise ratio (SNR), ≈ 91% (including both dark counts and multipair emissions, with 55 dB two-photon attenuation and a 1.5 ns coincidence window).

Violation by 16 SD over 144 kilometers.
http://arxiv.org/abs/0811.3129
What does visibility have in common with detection efficiency? :bugeye:
Visibility = (coincidence-max - coincidence-min)/(coincidence-max + coincidence-min)
Efficiency = coincidence rate/singles rate
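Spelled out as a quick sketch (illustrative numbers of my own, not from any experiment), the two formulas answer different questions, and one can be near unity while the other is tiny:

```python
def visibility(coinc_max, coinc_min):
    # How cleanly the coincidence rate modulates between max and min.
    return (coinc_max - coinc_min) / (coinc_max + coinc_min)

def efficiency(coincidence_rate, singles_rate):
    # What fraction of single detections ever show up in a coincidence.
    return coincidence_rate / singles_rate

print(visibility(990, 10))      # 0.98 -> near-perfect visibility...
print(efficiency(1000, 10000))  # 0.1  -> ...with only 10% pair efficiency
```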
 
  • #950
JesseM said:
A few papers I came across suggested that experiments which closed both the detector efficiency loophole and the locality loophole simultaneously would likely be possible fairly soon. If someone offered to bet Bill a large sum of money that the results of these experiments would continue to match the predictions of QM (and thus continue to violate Bell inequalities that take into account detector efficiency), would Bill bet against them?
Interesting. And do those papers suggest at least approximately what kind of experiments they will be?
Or is it just a very general idea?

Besides, if you want to discuss betting with money, you are in the wrong place.
 
  • #951
nismaratwork said:
I can see why Dirac disdained this kind of pondering, which in the end has little or nothing to do with the work of physics and its applications in life.

So true. It would be nice if, while debating the placement of punctuation and the definitions of the words we speak daily, we reminded ourselves of the importance of predictions and related experiments. Because every day there are fascinating new experiments involving new forms of entanglement. That would be the same "action at a distance" envisioned in this thread, which some think they have "disproven".

And to prove that, just check out the following link:

As of this morning, this represented 572 articles - many theoretical but also many experimental - on entanglement and Bell.

Oh, and that would be so far in 2010. Please folks, get a grip. You don't need to take my word for it. Read about 50 or 100 of these papers, and you will see that these issues are being tackled every day by physicists who wake up thinking about this. And mixed in you will also see many interesting alternative ideas that are outside the mainstream: these articles are not all peer reviewed. Look for a Journal Reference to get those, which tend to be mainstream and of higher quality overall. Many experimental results will be peer reviewed.
 
Last edited:
  • #952
zonde said:
What does visibility have in common with detection efficiency? :bugeye:
Visibility = (coincidence-max - coincidence-min)/(coincidence-max + coincidence-min)
Efficiency = coincidence rate/singles rate

They are often used differently in different contexts. The key is to ask: what pairs am I attempting to collect? Did I collect all of those pairs? Once I collected them, was I able to deliver them to the beam splitter? Of those photons going through the beam splitter, what % were detected? By analyzing carefully, the experimenter can often answer these questions. In state-of-the-art Bell tests, these can be important - but not always. Each test is a little different. For example, if fair sampling is assumed, then strict evaluation of visibility may not be important. But if you are testing the fair sampling assumption as part of the experiment, it would be an important factor.

Clearly, the % of cases where there is a blip at Alice's station but not Bob's (and vice versa) is a critical piece of information where fair sampling is concerned. Subtracting that from 100% gives you a number; I believe this is what Zeilinger refers to as visibility, though honestly it is not always clear to me from the literature. Sometimes this may be called detection efficiency. At any rate, there are several distinct issues involved.

Keep in mind that for PDC pairs, the geometric angle of the collection equipment is critical. Ideally, you want to collect as many entangled pairs as possible and as few unentangled ones as possible. If the alignment is not correct, you will miss entangled pairs. You may even mix in some unentangled pairs (which will reduce your results from the theoretical maximum violation of a BI). There is something of a boundary at which collecting more entangled pairs is offset by admitting too many more unentangled ones. So it is a balancing act.
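As a toy version of that bookkeeping (hypothetical timestamps, not data from any experiment), one can match detections at the two stations within a coincidence window - say the 1.5 ns window from the Zeilinger quote above - and then count what is left unmatched:

```python
def count_coincidences(alice_times, bob_times, window=1.5e-9):
    """Greedily pair two sorted timestamp lists within +/- window seconds."""
    matches, i, j = 0, 0, 0
    while i < len(alice_times) and j < len(bob_times):
        dt = alice_times[i] - bob_times[j]
        if abs(dt) <= window:
            matches, i, j = matches + 1, i + 1, j + 1
        elif dt > 0:
            j += 1   # Bob's event is too early; no later Alice event can match it
        else:
            i += 1   # Alice's event is too early; no later Bob event can match it
    return matches

# Hypothetical timestamps in seconds: 4 events at Alice, 3 at Bob.
alice = [1.0e-6, 2.0e-6, 3.0e-6, 4.0e-6]
bob = [1.0e-6 + 0.5e-9, 2.0e-6 + 5.0e-9, 4.0e-6 - 1.0e-9]
pairs = count_coincidences(alice, bob)
print(pairs)                   # 2 coincidences
print(1 - pairs / len(alice))  # 0.5 of Alice's blips had no partner at Bob
```

It is that unmatched fraction, not the visibility, that the fair sampling assumption has to speak to.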
 
  • #953
ThomasT said:
I understand the proofs of BIs. What I don't understand is why nonlocality or ftl are seriously considered in connection with BI violations, and treated by some as synonymous with quantum entanglement.
In defining the argument, the assumptions and consequences had to be enumerated, regardless of how unlikely one or another potential consequence might be from some point of view. IF something physical actually traverses that space, in the allotted time, to effect the outcomes as measured, realism is saved. It doesn't matter how reasonable or silly it might be; given the "IF", the consequence follows and thus must be included in the range of potentials.

JesseM said:
Yes, you don't understand it, but mainstream physicists are in agreement that Bell's inequalities all follow directly from local realism plus a few minimal assumptions (like no parallel universes, no 'conspiracies' in past conditions that predetermine what choice the experimenter will make on each trial and tailor the earlier hidden variables to those future choices), so why not consider the possibility that the problem lies with your understanding rather than with that of all those physicists over the decades?
This is the worst possible argument. It is almost precisely the argument a friend of mine, who turned religious, used to try to convert me. It's invalid in any context, no matter how solid the claim it's used to support. I cringe whenever such an argument is made, no matter how trivially true the claim it supports. So if the majority "don't understand", as you have stated for yourself, acceptance of this argument makes majority acceptance a self-fulfilling prophecy.

JesseM said:
And (2) would necessarily be true in all local realist theories that satisfy those few minimal assumptions. (2) is not in itself a separate assumption; it follows logically from the postulate of local realism.
You call it a postulate of local realism, but fail to mention that this "postulate of local realism" is predicated on a very narrowly defined 'operational' definition, which even its originators (EPR) disavowed, at the time it was proposed, as the only or sufficiently complete definition. It was a definition that I personally rejected at a very young age, before I ever heard of EPR or knew what QM was, solely on classical grounds, but related to some ideas DrC used to reject Hume realism. Now such silly authority arguments, as provided above, are used to demand that I drop "realism", because somebody generalized an 'operational' definition that I rejected in my youth and proved it false. Am I supposed to be in awe of that?

JesseM said:
It implies the falsity of local realism, which means if you are a realist who believes in an objective universe independent of our measurements, and you don't believe in any of the "weird" options like parallel worlds or "conspiracies", your only remaining option is nonlocality/ftl.
Unequivocally false. There are other options, unless you want to insist that one 'operational' definition is, by academic definition, the only definition of realism available. Even then it doesn't make you right; you have only chosen a definition to ensure you can't be wrong.

There are whole ranges of issues involved, many of which have philosophical content that doesn't strictly belong in science, unless of course you can formalize it into something useful. Yet the "realism" claim associated with Bell is a philosophical claim, made by taking a formalism geared toward a single 'operational' definition and expanding it over the entire philosophical domain of realism. It's a massive composition fallacy.

The composition fallacy runs even deeper. There's the assumption that the things we measure are existential in the sense of things. Even if every possible measurable we are capable of is provably no more real than a coordinate choice, it is NOT proof that things don't exist independent of being measured (the core assumption of realism), or that a theoretical construct can't build an empirically consistent emergent system based on existential things. Empirical completeness and the completeness of nature are not synonymous. Fundamentally, realism is predicated on measurement independence, and cannot be proved false on the grounds that an act of measurement has effects. If it didn't have effects, measurements would be magical. Likewise, in a strict realist sense, an existential thing (an independent variable) which has independent measurable properties is also a claim of magic.

So please, at least qualify local realism with "Einstein realism", "Bell realism", or some other suitable qualifier, so as not to make the absurd excursion into a blanket philosophical claim that the entire range of all forms of "realism" is provably falsified. It turns science into a philosophical joke, whether right or wrong. If this argument is overly philosophical, sorry, but that's what the blanket claim that BI violations falsify local realism imposes.
 
Last edited:
  • #954
my_wan said:
It turns science into a philosophical joke, whether right or wrong. If this argument is overly philosophical, sorry, but that's what the blanket claim that BI violations falsify local realism imposes.

Does it help if we say that BI violations blanket falsify claims of EPR (or Bell) locality and EPR (or Bell) realism? Because if words are to have meaning at all, this is the case.
 
  • #955
DrChinese said:
So true. It would be nice if, while debating the placement of punctuation and the definitions of the words we speak daily, we reminded ourselves of the importance of predictions and related experiments. Because every day there are fascinating new experiments involving new forms of entanglement. That would be the same "action at a distance" envisioned in this thread, which some think they have "disproven".

And to prove that, just check out the following link:

As of this morning, this represented 572 articles - many theoretical but also many experimental - on entanglement and Bell.

Oh, and that would be so far in 2010. Please folks, get a grip. You don't need to take my word for it. Read about 50 or 100 of these papers, and you will see that these issues are being tackled every day by physicists who wake up thinking about this. And mixed in you will also see many interesting alternative ideas that are outside the mainstream: these articles are not all peer reviewed. Look for a Journal Reference to get those, which tend to be mainstream and of higher quality overall. Many experimental results will be peer reviewed.

I like this approach very much. We should never forget the need for what works, and how it works in the midst of WHY it works.
 
  • #956
my_wan said:
This is the worst possible argument. It is almost precisely the argument a friend of mine, who turned religious, used to try to convert me. It's invalid in any context, no matter how solid the claim it's used to support. I cringe whenever such an argument is made, no matter how trivially true the claim it supports. So if the majority "don't understand", as you have stated for yourself, acceptance of this argument makes majority acceptance a self-fulfilling prophecy.
Huh? I said it was ThomasT who didn't understand Bell's proof, not the majority of physicists. And in technical subjects like science and math, I think it's perfectly valid to say that if some layman doesn't understand the issues very well but is confused about the justification for some statement that virtually all experts endorse, the default position of a layman showing intellectual humility should be that it's more likely the mistake lies with his/her own understanding, rather than taking it as a default that they've probably found a fatal flaw that all the experts have overlooked and proceeding to try to convince others of that. Of course this is just a sociological statement about likelihood that a given layman has actually discovered something groundbreaking, I'm not trying to argue that anyone should take the mainstream position on faith or not bother asking questions about the justification for this position. But if you don't take this advice there's a good chance you'll fall victim to the Dunning-Kruger effect, and perhaps also become the type of "bad theoretical physicist" described by Gerard 't Hooft here.
my_wan said:
You call it a postulate of local realism, but fail to mention that this "postulate of local realism" is predicated on a very narrowly defined 'operational' definition, which even its originators (EPR) disavowed, at the time it was proposed, as the only or sufficiently complete definition.
I define "local realism" to mean that facts about the complete physical state of any region of spacetime can be broken down into a sum of local facts about the state of individual points in spacetime in that region (like the electromagnetic field vector at each point in classical electromagnetism), and that each point can only be causally influenced by other points in its past light cone. Do you think that this is too "narrowly defined" or that EPR would have adopted a broader definition where the above wasn't necessarily true? (if so, can you provide a relevant quote from them?) Or alternatively, do you think that Bell's derivation of the Bell inequalities requires a narrower definition than the one I've just given?
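As a discrete caricature of that definition (a toy construction of my own, nothing to do with Bell's actual proof): the complete state is nothing but a list of local values, and each update reads only a cell's immediate neighbours, so a disturbance can spread at most one cell per time step, a miniature "light cone":

```python
def step(state):
    # Each cell's next value depends only on (left, self, right):
    # the locality condition in miniature.
    n = len(state)
    return [state[(i - 1) % n] ^ (state[i] | state[(i + 1) % n])
            for i in range(n)]

state = [0] * 21
state[10] = 1                              # one local disturbance
for _ in range(5):
    print("".join(str(c) for c in state))  # watch the "cone" widen
    state = step(state)
```

Any theory of roughly this shape, discrete or continuous, fits the definition above.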
my_wan said:
Unequivocally false. There are other options, unless you want to insist that one 'operational' definition is, by academic definition, the only definition of realism available. Even then it doesn't make you right; you have only chosen a definition to ensure you can't be wrong.
I don't know what you mean by "operational", my definition doesn't appear to be an operational one but rather an objective description of the way the laws of physics might work. If you do think my definition is too narrow and that there are other options, could you give some details on what a broader definition would look like?
my_wan said:
There are whole ranges of issues involved, many of which have philosophical content that doesn't strictly belong in science, unless of course you can formalize it into something useful. Yet the "realism" claim associated with Bell is a philosophical claim, made by taking a formalism geared toward a single 'operational' definition and expanding it over the entire philosophical domain of realism. It's a massive composition fallacy.
In a scientific/mathematical field it's only meaningful to use terms like "local realism" if you give them some technical definition which may be different than their colloquial meaning or their meaning in nonscientific fields like philosophy. So if a physicist makes a claim about "local realism" being ruled out, it doesn't really make sense to say the claim is a "fallacy" on the basis of the fact that her technical definition doesn't match how you would interpret the meaning of that phrase colloquially or philosophically or whatever. That'd be a bit like saying "it's wrong to define momentum as mass times velocity, since that definition doesn't work for accepted colloquial phrases like 'we need to get some momentum going on this project if we want to finish it by the deadline'".
my_wan said:
The composition fallacy runs even deeper. There's the assumption that the things we measure are existential in the sense of things.
Not sure what you mean. Certainly there's no need to assume, for example, that when you measure different particles' "spins" by seeing which way they are deflected in a Stern-Gerlach device, you are simply measuring a pre-existing property which each particle has before measurement (so each particle was already either spin-up or spin-down on the axis you measure).
my_wan said:
Even if every possible measurable we are capable of is provably no more real than a coordinate choice
Don't know what you mean by that either. Any local physical fact can be defined in a way that doesn't depend on a choice of coordinate system, no?
my_wan said:
it is NOT proof that things don't exist independent of being measured (the core assumption of realism).
Since I don't know what it would mean for "every possible measurable we are capable of is provably no more real than a coordinate choice" to be true, I also don't know why the truth of this statement would be taken as "proof that things don't exist independent of being measured". Are you claiming that any actual physicists argue along these lines? If so, can you give a reference or link?
my_wan said:
Fundamentally, realism is predicated on measurement independence, and cannot be proved false on the grounds that an act of measurement has effects.
I don't see why, nothing about my definition rules out the possibility that the act of measurement might always change the system being measured.
my_wan said:
So please, at least qualify local realism with "Einstein realism", "Bell realism", or some other suitable qualifier, so as not to make the absurd excursion into a blanket philosophical claim that the entire range of all forms of "realism" is provably falsified.
All forms compatible with my definition of local realism are incompatible with QM. I don't know if you would have a broader definition of "local realism" than mine, but regardless, see my point about the basic independence of the technical meaning of terms and their colloquial meaning.
 
Last edited:
  • #957
zonde said:
Interesting. And do those papers suggest at least approximately what kind of experiments they will be?
Or is it just a very general idea?
See for example this paper and this one...the discussion seems fairly specific.
zonde said:
Besides, if you want to discuss betting with money, you are in the wrong place.
I was just trying to get a sense of whether Bill actually believed himself it was likely that all the confirmation of QM predictions in these experiments would turn out to be a consequence of a local realist theory that was "exploiting" both the detector efficiency loophole and the locality loophole simultaneously, or if he was just scoffing at the fact that experiments haven't closed both loopholes simultaneously for rhetorical purposes (of course there's nothing wrong with pointing out the lack of loophole-free experiments in this sort of discussion, but Bill's triumphant/mocking tone when pointing this out would seem a bit hollow if he didn't actually think such a loophole-exploiting local theory was likely).
 
  • #958
DrChinese said:
Does it help if we say that BI violations blanket falsify claims of EPR (or Bell) locality and EPR (or Bell) realism? Because if words are to have meaning at all, this is the case.
That is in fact the case. BI violations do in fact rule out the very form of realism they were predicated on. "EPR local realism" would be fine, as that names the source of the operational definition Bell did in fact falsify. Some authors already do this; Adán Cabello, for example, spelled it out as "Einstein-Podolsky-Rosen element of reality" (Phys. Rev. A 67, 032107 (2003)). Perfectly acceptable.

As an aside, I really doubt that any given individual element of "physical" reality, assuming such exist and realism holds, corresponds to any physically measurable quantity. This does not a priori preclude a theoretical construct from successfully formalizing such elements. Note how diametrically opposed this is to the operational definition used by EPR:
“If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of physical reality corresponding to this physical quantity.”

The original EPR paper gave a more general definition of realism that wasn't so contingent on the operational definition:
"every element of physical reality must have a counterpart in physical theory."
Though this almost certainly implicitly assumed some correspondence I consider more than a little suspect, it doesn't a priori assume that an element of physical reality has a direct correspondence with observables. Neither do I consider an empirically complete theory incomplete on the grounds that unobservables may be presumed to exist yet not be defined by the theory. That also opposes Einstein's realism. However, certain issues, such as the vacuum catastrophe, GR + QM, dark matter/energy, etc., are fairly good if presumptuous indicators of incompleteness.

These presumptions, which oppose the EPR definition, began well before I was a teenager or had any clue about EPR or QM, and were predicated on hard realism. Thus I can't make specific claims; I can only state that blanket philosophical claims about what BI violations falsify are a highly unwarranted composition fallacy, not that the claim is ultimately false.
 
  • #959
JesseM said:
I was just trying to get a sense of whether Bill actually believed himself it was likely that all the confirmation of QM predictions in these experiments would turn out to be a consequence of a local realist theory that was "exploiting" both the detector efficiency loophole and the locality loophole simultaneously, or if he was just scoffing at the fact that experiments haven't closed both loopholes simultaneously for rhetorical purposes
And as I explained, I do not engage in these discussions for religious purposes, so I'm surprised you would expect me to bet. A claim has been made about the non-locality of the universe. I, and others, have raised questions about the premises used to support that claim. Rather than explain why the premises are true, you expect me to bet that the claim is not true. In fact the suggestion is itself perhaps suggestive of your approach to these discussions, which I do not consider to be about winning or losing an argument but about understanding the truth of the issues in front of us.

The fact that QM and experiments agree is a big hint that the odd man out (the Bell inequalities) does not model the same thing QM does, which is what is realized in real experiments. There is no question about this. I think you agree with this. So I'm not sure why you think that repeatedly mentioning the fact that numerous experiments have agreed with QM somehow advances your argument. It doesn't. Also the phrase "experimental loopholes" is a misnomer because it gives the false impression that there is something "wrong" with the experiments, such that "better" experiments have to be performed. This is a backwards way of looking at it. Every so-called "loophole" is actually a hidden assumption made by Bell in deriving his inequalities.

When I mentioned "assumption" previously, you seemed to express surprise, despite the fact that I have already pointed out to you several times hidden assumptions within Bell's treatment that make it incompatible with Aspect-type experiments. If any one or more of the assumptions in Bell's treatment are not met in the experiments, Bell's inequalities will not apply. The locality assumption is explicit in Bell's treatment, so Bell's proponents think violation of the inequalities definitely means violation of the locality principle. But there are other hidden assumptions such as:

1) Every photon pair will be detected (due to choice of only +/- as possible outcomes)
2) P(λ) is equivalent for each of the terms of the inequality
3) Datasets of pairs are extracted from a dataset of triples
4) Non-contextuality
5) ...

And there are others I have not mentioned or that are yet to be discovered. So whenever you hear about the "detection efficiency loophole", the issue really is a failure of hidden assumption (1). And the other example I just gave a few posts back, about cyclicity and indexing, involves the failure of (2) and (3).

It is therefore not surprising that some groups have reported on locally causal explanations of many of these Bell-test experiments, again confirming that the problem is in the hidden assumptions used by Bell, not in the experimenters.

JesseM said:
(of course there's nothing wrong with pointing out the lack of loophole-free experiments in this sort of discussion, but Bill's triumphant/mocking tone when pointing this out would seem a bit hollow if he didn't actually think such a loophole-exploiting local theory was likely).
I make an effort to explain my point of view; you are free to completely demolish it with legitimate arguments. I will continue to point out the flaws I see in your responses (as long as a relevant response can be discerned from them), and if your arguments are legitimate, I will change my point of view accordingly. But if you cannot provide a legitimate argument and you think of the goal of discussion as one of winning/losing, you may be inclined to interpret my conviction about my point of view as "triumphant/mocking". But that is just your perspective and you are entitled to it, even if it is false.
 
  • #960
billschnieder said:
It is therefore not surprising that some groups have reported on locally causal explanations of many of these Bell-test experiments, again confirming that the problem is in the hidden assumptions used by Bell, not in the experimenters.

When you say "explanations", I wonder exactly what qualifies as an explanation. The only local realistic model I am aware of is the De Raedt et al model, which is a computer simulation that satisfies Bell. All other local explanations I have seen are not realistic or have been generally refuted (e.g. Christian). And again, by realistic, I mean per the Bell definition (simultaneous elements of reality, settings a, b and c).
 
  • #961
billschnieder said:
As I mentioned to you earlier, it is your opinion here that is wrong.
Are you saying that Leggett and Garg themselves claimed that their inequality should apply to situations where the three values a,b,c don't represent times of measurement, including the scenario with doctors collecting data on patients from different countries? If so, can you quote from the paper since it doesn't seem to be available freely online? Or are you making some broader claim that the reasoning Leggett and Garg used could just as easily be applied to other scenarios, even if they themselves didn't do this?
billschnieder said:
Of course, the LGI applies to the situation you mention, but inequalities of that form were originally proposed by Boole in 1862 (see http://rstl.royalsocietypublishing.org/content/152/225.full.pdf+html) and had nothing to do with time. All that is necessary for them to apply is n-tuples of two-valued (+/-) variables. In Boole's case it was three Boolean variables. The inequalities result simply from arithmetic, and nothing else.
We perform an experiment in which each data point consists of a triple of data such as (i,j,k). Let us call this set S123. We then decide to analyse this data by extracting three datasets of pairs such as S12, S13, S23. What Boole showed was essentially that if i, j, k are two-valued variables, then no matter the type of experiment generating S123, the datasets of pairs extracted from S123 will satisfy the inequalities:

|<S12> +/- <S13>| <= 1 +/- <S23>
The paper by Boole you linked to is rather long and he doesn't seem to use the same notation; can you point me to the page number where he derives an equivalent equation, so I can see the discussion leading up to it? I would guess he was assuming that we were picking which pairs to extract from each triple in a random way, so that there'd be no possibility of a systematic correlation between our choice of which pair to extract and the values of all members of the triple S123 (and even if Boole neglected to explicitly mention such an assumption, I'd assume you could find later texts on probability which did). And this would be equivalent to the assumption Leggett and Garg made of "noninvasive measurement": that the choice of which times to measure a given particle or system isn't correlated with the probability of different hidden classical histories the particle/system might be following. So if you construct an example where we would expect a correlation between which pair of values is sampled and the underlying facts about the values for all three possibilities, then I expect neither Boole nor Leggett and Garg would find it surprising, or contrary to their own proofs, that the inequalities would no longer hold.
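As a quick numerical spot-check of that reading (my own construction, not Boole's notation): if all three pair averages are computed over one common population of two-valued triples, which is what unbiased random extraction converges to as the sample grows, the inequalities cannot fail:

```python
import random

random.seed(0)
# Any distribution over two-valued triples will do; uniform is just simplest.
triples = [tuple(random.choice([-1, 1]) for _ in range(3))
           for _ in range(100000)]

n = len(triples)
S12 = sum(i * j for i, j, k in triples) / n
S13 = sum(i * k for i, j, k in triples) / n
S23 = sum(j * k for i, j, k in triples) / n

print(abs(S12 + S13) <= 1 + S23)  # True, for every possible set of triples
print(abs(S12 - S13) <= 1 - S23)  # True, for every possible set of triples
```

The inequalities hold triple by triple, so averaging over a shared population preserves them; the real question is whether the extracted pairs actually sample a shared population, which is where the noninvasive-measurement or no-conspiracy assumption enters.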
billschnieder said:
You can verify that this is Bell's inequality (replace 1,2,3 with a,b,c,).
You mean equation (15) in his original paper? But in his derivations the hidden variable λ can represent conditions that occur before the experimenters make a random choice of detector settings (see p. 242 of Speakable and Unspeakable in Quantum Mechanics), so there's good justification for saying λ should be independent of detector settings, and in any case this is explicitly included as the "no-conspiracy condition" in rigorous proofs of Bell's theorem.
billschnieder said:
So a violation of these inequalities by the data points to a mathematically incorrect treatment of the data.
A violation of the inequalities by data which doesn't match the conditions Bell and Leggett-Garg and Boole assumed when deriving them doesn't indicate a flaw in reasoning which says the inequalities should hold if the conditions are met.
JesseM said:
I also found the paper where you got the example with patients from different countries here,
billschnieder said:
That is why I gave you the reference before; have you read it, all of it?
You mentioned the name of the paper but didn't give a link; when I said I "found" it I just meant I had found an online copy. And no, I didn't read it all the way through, just enough sections that I thought I got the idea of how they thought the scenario with patients from different countries was supposed to be relevant to Leggett-Garg. If there is some particular section you think I should pay more attention to, feel free to point to it.
JesseM said:
This critique appears to be rather specific to the Leggett-Garg inequality; maybe you could come up with a variation for other inequalities, but it isn't obvious to me (I think the 'noninvasive measurements' condition would be most closely analogous to the 'no-conspiracy' condition in the usual inequalities, but the 'no-conspiracy' condition is a lot easier to justify in terms of local realism when λ can refer to the state of local variables at some time before the experimenters choose what detector settings to use)
billschnieder said:
This is not a valid criticism for the following reason:

1) You do not deny that the LGI is a Bell-type inequality. Why do you think it is called that?
Because the derivation is closely analogous and the conclusion (that QM is incompatible with certain assumptions about 'hidden' objective facts that determine measurement outcomes) is also quite similar. However, the assumptions in the derivation do differ from the assumptions in other Bell-type proofs even if they are very analogous (like the no-conspiracy assumption being replaced by the noninvasive measurement assumption).
billschnieder said:
2) You have not convincingly argued why the LGI should not apply to the situation described in the example I presented
I don't have access to the original Leggett-Garg paper, but this paper which I linked to before says:
In a paper provocatively entitled "Quantum Mechanics versus Macroscopic Realism: Is the Flux There when Nobody Looks?", A. J. Leggett and A. Garg [1] proposed a way to determine whether the magnetic flux of a SQUID (superconducting quantum interference device) was compatible with the postulates:

(A1) Macroscopic Realism: "A macroscopic system with two or more macroscopically distinct states available to it will at all times be in one or the other of these states."

(A2) Noninvasive Measurability: "It is possible, in principle, to determine the state of the system with arbitrary small perturbation on its subsequent dynamics."
So, the quote after (A2) does indicate that they were assuming the condition that the choice of which two measurements to make isn't correlated with the values the system takes at each of the three possible times. An example which is constructed in such a way that there is a correlation between the two sample points and the three values for each data triplet would be one that isn't meeting this condition, and thus there'd be no reason to expect the inequality to hold for it, so it isn't a flaw in the derivation that you can point to such an example.
billschnieder said:
3) You do not deny the fact that in the example I presented, the inequalities can be violated simply based on how the data is indexed.
Unclear what you mean by "simply based on how the data is indexed". In the example, the Ab in AaAb was taken under consistently different observable experimental conditions than the Ab in AbAc; the first Ab always has a superscript 2 indicating a patient from Lyon, the second Ab always has a superscript 1 indicating a patient from Lille. And they also say:
On even dates we have Aa = +1 and Ac = −1 in both cities while Ab = +1 in Lille and Ab = −1 in Lyon. On odd days all signs are reversed.
So, in this case, depending on whether you are looking at the data pair AaAb or AbAc on a given date, the value of Ab is different. And even if you don't know the date information, from an objective point of view (the point of view of an all-knowing omniscient being), this isn't a case where each sample is taken from a "data point" consisting of a triplet of objective (hidden) facts about a, b, c, such that the probability distribution on triplets for a sample pair AaAb is the same as the probability distribution on triplets for the other two sample pairs AaAc and AbAc. In the frequentist understanding of probability, this means that in the limit as the number of sample pairs goes to infinity, the frequency at which any given triplet (or any given ordered pair of triplets, if the two members of the sample pair are taken from different triplets) is associated with samples of type AaAb should be the same as the frequency at which the same triplet is associated with samples of type AaAc and AbAc. If the "noninvasive measurability" criterion is met in a Leggett-Garg test, this should be true of the measurements at different pairs of times on SQUIDs if local realism is true. Likewise, if the no-conspiracy condition is true in a test of the form Bell discussed in his original paper, this should also be true if local realism is true.
billschnieder said:
4) You do not deny the fact that in the example, there is no way to ensure the data is correctly indexed unless all relevant parameters are known by the experimenters
I would deny that, at least in the limit as the number of data points becomes very large. In this case they could just pool all their data, and use a random process (like a coin flip) to decide whether each Aa should be put in a pair with an Ab data point or an Ac data point, and similarly for the other two.
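To see it concretely, here is a toy rerun of that example (my own simulation of the Lille/Lyon data as quoted above): the pairing convention alone decides the outcome.

```python
import random

random.seed(1)
N = 100000
biased = {"ab": 0, "ac": 0, "bc": 0}
pooled = {"ab": 0, "ac": 0, "bc": 0}

for date in range(N):
    sign = 1 if date % 2 == 0 else -1   # "on odd days all signs are reversed"
    Aa, Ac = sign, -sign
    Ab_lille, Ab_lyon = sign, -sign
    # Biased indexing: AaAb always takes Ab from Lyon, AbAc always from Lille.
    biased["ab"] += Aa * Ab_lyon
    biased["bc"] += Ab_lille * Ac
    biased["ac"] += Aa * Ac
    # Pooled indexing: a coin flip picks which city's Ab enters each pair.
    pooled["ab"] += Aa * random.choice([Ab_lille, Ab_lyon])
    pooled["bc"] += random.choice([Ab_lille, Ab_lyon]) * Ac
    pooled["ac"] += Aa * Ac

def holds(d, slack=0.05):               # slack absorbs sampling noise
    ab, ac, bc = d["ab"] / N, d["ac"] / N, d["bc"] / N
    return abs(ab + ac) <= 1 + bc + slack

print(holds(biased))  # False: "violated" purely by how Ab was indexed
print(holds(pooled))  # True: random pairing restores the inequality
```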
billschnieder said:
5) You do not deny that Bell's inequalities involve pairs from a set of triples (a,b,c) and yet experiments involve triples from a set of pairs.
I certainly deny this too; in fact I don't know what you can be talking about here. Different inequalities involve different numbers of possible detector settings, but if you look at any particular experiment designed to test a particular inequality, you always find the same number of possible detector settings in the inequality as in the experiment. If you disagree, point me to a particular experiment where you think this wasn't true!
billschnieder said:
6) You do not deny that it is impossible to measure triples in any EPR-type experiment, therefore Bell-type inequalities do not apply to those experiments.
This one is so obviously silly you really should know better. The Bell-type inequalities are based on the theoretical assumption that on each trial there is a λ which either predetermines a definite outcome for each of the three detector settings (like the 'hidden fruits' that are assumed to be behind each box in my scratch lotto analogy), or at least predetermines a probability for each of the three which is not influenced by what happens to the other particle at the other detector (i.e. P(A|aλ) is not different from P(A|Bbaλ)). If this theoretical assumption were valid, and the probability of different values of λ on each trial did not depend on the detector settings a and b on that trial, then this would be a perfectly valid situation where these inequalities would be predicted to hold. Of course we don't know if these theoretical assumptions actually hold in the real world, but that's the point of testing whether the inequalities hold up in the real world--if they don't, and our experiments meet the necessary observable conditions that were assumed in the derivation, then this constitutes an experimental falsification of one of the predictions of our original theoretical assumptions.
billschnieder said:
Boole had shown 100+ years ago that you cannot substitute Rij for Sij in those types of inequalities.
I don't know what you mean by "Rij".
 
Last edited by a moderator:
  • #962
JesseM said:
Huh? I said it was ThomasT who didn't understand Bell's proof, not the majority of physicists.
Oops, my apologies.

Wrt the Dunning-Kruger effect, I agree. However, if you applied the same standard to the best-educated people of the past, they would be subject to the same illusory superiority, though they likely had the skills to overcome it given the right information. So for those who simply insist X is wrong, you have a point. I'm not denying the validity of Bell's theorem in conclusively ruling out a brand of realism. I'm denying the generalization of that proof to all forms of realism.

I have fallen victim to assuming X, which apparently entailed Y, and trying to maintain X by maintaining Y, only to realize X could be maintained without Y. It happens. But in an interesting subject it's not always in our best interest to take an authority at face value; rather, we should question it. Denying an authority as wrong, silly, etc., without a very solid and convincing argument is just being a crackpot. Yet authoritative sources can also overestimate the generality that a given piece of knowledge endows.

JesseM said:
I define "local realism" to mean that facts about the complete physical state of any region of spacetime can be broken down into a sum of local facts about the state of individual points in spacetime in that region (like the electromagnetic field vector at each point in classical electromagnetism), and that each point can only be causally influenced by other points in its past light cone. Do you think that this is too "narrowly defined" or that EPR would have adopted a broader definition where the above wasn't necessarily true? (if so, can you provide a relevant quote from them?) Or alternatively, do you think that Bell's derivation of the Bell inequalities requires a narrower definition than the one I've just given?
Ok, that works. But I got no response on what effects the non-commutativity of vector products, even classical vectors, has on the computational demands of modeling BI violations. If these elements are transfinite, what role might Hilbert's paradox of the Grand Hotel play in such effects? EPR correlations are certainly not unique in requiring relative offsets versus absolute coordinate values; SR is predicated on it. If observables are projections from a space with an entirely different metric, which doesn't commute with a linear metric of the space we measure, that could impart computational difficulties which BI doesn't recognize. I didn't get any response involving Maxwell's equations either.
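That classical fact, at least, is easy to pin down numerically (standard linear algebra, nothing EPR-specific):

```python
import numpy as np

a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])
# Cross products anticommute: a x b = -(b x a).
print(np.cross(a, b), np.cross(b, a))   # [0. 0. 1.] vs [0. 0. -1.]

def rot_x(t):  # rotation about the x axis by angle t
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_z(t):  # rotation about the z axis by angle t
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Finite 3D rotations do not commute either: order matters.
print(np.allclose(rot_x(0.3) @ rot_z(0.5), rot_z(0.5) @ rot_x(0.3)))  # False
```

Whether that non-commutativity actually matters for the computational cost of modeling BI violations is exactly the question I never got an answer to.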

I'm not trying to make the point that Bell was wrong; he was absolutely and unequivocally right, within the context of the definition he used. I'm merely rejecting the overgeneralization of that definition. Even if no such realistic model exists, by any definition, I still want to investigate all the different principles that might be behind such an effect. The authoritative claim that Bell was right is perfectly valid; to overgeneralize that into a sea of related unknowns, even by authoritative sources, is unwarranted.

JesseM said:
I don't know what you mean by "operational", my definition doesn't appear to be an operational one but rather an objective description of the way the laws of physics might work. If you do think my definition is too narrow and that there are other options, could you give some details on what a broader definition would look like?
Physics is contingent upon operational, not philosophical, claims. What did the original EPR paper say about it?
1) "far from exhausting all possible ways to recognize physical reality"
2) "Regarded not as a necessary, but merely as a sufficient, condition of reality"
3) "A comprehensive definition of reality is, however, unnecessary for our purposes"

In other words, they chose a definition that had operational value in making the point more concise, not a definition which defined reality itself.

JesseM said:
In a scientific/mathematical field it's only meaningful to use terms like "local realism" if you give them some technical definition which may be different than their colloquial meaning or their meaning in nonscientific fields like philosophy. So if a physicist makes a claim about "local realism" being ruled out, it doesn't really make sense to say the claim is a "fallacy" on the basis of the fact that her technical definition doesn't match how you would interpret the meaning of that phrase colloquially or philosophically or whatever. That'd be a bit like saying "it's wrong to define momentum as mass times velocity, since that definition doesn't work for accepted colloquial phrases like 'we need to get some momentum going on this project if we want to finish it by the deadline'".
True, technical definitions confuse the uninitiated quite often. Locality is one of them, being predicated on relativity. Thus it's in principle possible to violate locality without violating realism, as the stated EPR consequences recognize. Yet if "realism" is technically predicated on the operational definition provided by EPR, why reject out of hand definitions other than the one EPR provided as a source of research? That is factually overstepping the technical bounds on which the "realism" used to reject it is academically defined. That's having your cake and eating it too.

JesseM said:
Not sure what you mean. Certainly there's no need to assume, for example, that when you measure different particle's "spins" by seeing which way they are deflected in a Stern-Gerlach device, you are simply measuring a pre-existing property which each particle has before measurement (so each particle was already either spin-up or spin-down on the axis you measure).
Unless observables are a linear projection from a space which has a non-linear mapping to our measured space of variables, to name just one possibility. Nor does realism necessarily entail pre-existing properties that are measurable. It doesn't even entail that independent variables have any independent measurable properties whatsoever.

JesseM said:
Don't know what you mean by that either. Any local physical fact can be defined in a way that doesn't depend on a choice of coordinate system, no?
Yes, assuming the variables required are algorithmically compressible or finite. I never got this answer either: what does it mean when you can model a rotation of the beam in an EPR model and maintain BI violations while individual photon paths vary, yet the apparently physically equivalent operation of uniformly rotating the pair of detectors destroys it? Are they not physically equivalent transforms? Why are physically equivalent transforms not physically equivalent? Perhaps the issue of non-commutative classical vector products needs to be investigated.

JesseM said:
All forms compatible with my definition of local realism are incompatible with QM. I don't know if you would have a broader definition of "local realism" than mine, but regardless, see my point about the basic independence of the technical meaning of terms and their colloquial meaning.
If that is your definition of local realism, fine. But you can't claim that your definition precludes alternatives, or that falsifying your definition falsifies the alternatives. You want a broader "technical" definition of realism? It's not mine; it came from exactly the same source as yours did: "An element of reality that exists independent of any measurement." That's it. Whatever more you add about its relationship with what is measured or measurable is a presumption that goes beyond the general definition. So I would say that the very source on which you base your claim of a "technical" definition disavows that particular definition as sufficiently general to constitute a general technical definition, even without being explicitly aware of the issues it now presents.
 
  • #963
billschnieder said:
And as I explained, I do not engage in these discussions for religious purposes, so I'm surprised why you would expect me to bet on.
A nonreligious person can have intuitions and opinions about the likelihood of various possibilities, like the likelihood that improved gravitational wave detectors will in the near future show that general relativity's predictions about gravitational waves are false. If someone doesn't think this is very likely, I would think it a bit absurd for them to gloat about the lack of experimental confirmation of gravitational waves in an argument with someone taking the mainstream view that general relativity is likely to be accurate at classical scales.
billschnieder said:
I, and others, have raised questions about the premises used to support that claim. Rather than explain why the premises are true, you expect me to bet that the claim is not true.
As you no doubt remember, I gave extended arguments and detailed questions intended to show why your claims that Bell's theorem is theoretically flawed or untestable don't make sense, but you failed to respond to most of my questions and arguments and then abruptly shut down the discussion, in multiple cases (as with my posts here and here, where I pointed out that your argument about the failure of the 'principle of common cause' ignored the specific types of conditions where it fails as outlined in the Stanford Encyclopedia article you were using as a reference, and I asked you to directly address my argument about past light cones in a local realist universe without relying on inapplicable statements from the encyclopedia article. Your response here was to ignore all the specific quotes I gave you about the nature of the required conditions and declare that you'd decided we'd have to 'agree to disagree' on the matter rather than discuss it further...if you ever change your mind and decide to actually address the light cone argument in a thoughtful way, you might start by saying whether you disagree with anything in post #63 here).

Of course the point that Bell inequalities might not actually be violated with loophole-free tests is totally separate from the idea that the proof itself is flawed or that perfect tests are impossible in the first place unless we know the values of all hidden variables and can control for them (the arguments you were making earlier). Unlike with those earlier arguments I don't actually disagree with your basic point that they might not be violated with loophole free tests so there's no need for me to try to argue with you about that, I was just using the idea of betting to point to the absurdity of your gloating attitude about the lack of loophole-free tests. I think this gloating rather typifies your "lawyerly" approach to the subject, where you are trying to cast doubt on Bell using rhetorical strategies rather than examine the issues in a detailed and thoughtful manner.
billschnieder said:
The fact that QM and experiments agree is a big hint that the odd man out (the Bell inequalities) does not model the same thing QM does, which is what is realized in real experiments.
Uh, the whole point of the Bell inequalities is to prove that the assumed conditions they are modeling (local realism) are incompatible with QM! Do you really not understand this after all this time, or is this just another example of "it sounds good rhetorically, who cares if it's really a plausible argument?"
billschnieder said:
So I'm not sure why you think that repeatedly mentioning the fact that numerous experiments have agreed with QM somehow advances your argument. It doesn't.
My "argument" is that Bell has a valid proof that local realism and QM are incompatible, and thus that experimental verification of QM predictions about Bell inequality violations also constitute experimental falsification of local realism. Do you really not understand the very basic logic of deriving certain predictions from theoretical assumptions, showing the predictions don't match reality, and therefore considering that this is experimental evidence that the theory doesn't describe the real world? This is just how any theory of physics would be falsified experimentally!
billschnieder said:
Also the phrase "experimental loopholes" is a misnomer because it gives the false impression that there is something "wrong" with the experiments, such that "better" experiments have to be performed. This is a backwards way of looking at it. Every so-called "loophole" is actually a hidden assumption made by Bell in deriving his inequalities.
The loopholes are just based on actual experiments not meeting the observable experimental conditions Bell was assuming would hold in the theoretical experiments that the inequalities are supposed to apply to, like the idea that there should be a spacelike separation between measurements (if an actual experiment doesn't conform to this, it falls prey to the locality loophole). None of them are based on whether the theoretical assumptions about the laws of physics used in the derivation (like the assumption that the universe follows local realist laws) are true or false.

To put it another way, Bell proved that (specified observable experimental conditions, like spacelike separation between measurements) + (theoretical assumptions about laws of physics, like local realism) = (Bell inequalities). So, if a real experiment matches all the observable experimental conditions but does not give results which satisfy the Bell inequalities, that's a good experimental falsification of the theoretical assumptions about the laws of physics that Bell made. On the other hand, if an experiment doesn't match all those observable conditions, then even if it violates Bell inequalities there may still be some remaining possibility that the theoretical assumptions actually do apply in our universe (so our universe might still obey local realist laws).
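To make this logic concrete, here is a minimal numerical sketch (not any particular physical model; the deterministic response function, the uniform hidden variable, and the detector angles are all illustrative assumptions). It shows a local hidden variable model respecting the CHSH bound of 2, whereas QM predicts 2√2 ≈ 2.83 for the same settings:

[code]
import numpy as np

rng = np.random.default_rng(0)

def outcome(setting, lam):
    # Deterministic local response: each side's +/-1 result depends only
    # on its own detector setting and the shared hidden variable lam.
    return np.where(np.cos(2 * (setting - lam)) >= 0, 1, -1)

def E(a, b, n=500_000):
    # Correlation E(a,b), estimated over many pairs sharing the same lam.
    lam = rng.uniform(0, np.pi, n)
    return np.mean(outcome(a, lam) * outcome(b, lam))

# Standard CHSH settings in radians (an illustrative choice).
a1, a2, b1, b2 = 0.0, np.pi / 4, np.pi / 8, 3 * np.pi / 8
S = abs(E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2))
print(S)  # ~2.0 (up to sampling noise): this toy local model saturates
          # but cannot exceed the CHSH bound of 2; QM gives 2*sqrt(2).
[/code]

Swapping in other local response functions or hidden-variable distributions is a useful exercise: the exact expectations never get above 2, which is precisely the content of the theorem.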
billschnieder said:
When I mentioned "assumption" previously, you seemed to express surprise, despite the fact that I have already pointed out to you several times hidden assumptions within Bell's treatment that make it incompatible with Aspect-type experiments.
And I've pointed out that some of the "hidden assumptions" you claimed were needed, like controlling for all the hidden variables, were not necessary. In this post you even seemed to be starting to get the point when you asked:
Is it your claim that Bell's "population" is defined in terms of "an infinite set of repetitions of the exact observable experimental conditions you were using"? If that is what you mean here then I fail to see the need to make any fair sampling assumption at all.
To which I responded in post #126 on that thread:
In the part in bold I think I made clear that Bell's proof would only apply to the exact observable experimental conditions you were using if it was true that those conditions met the "basic criteria" I mentioned above. I allowed for the possibility that 100% detector efficiency might be one of the conditions needed--DrChinese's subsequent posts seem to say that the original Bell inequalities do require this assumption, although perhaps you can derive other inequalities if the efficiency lies within some known bounds, and he seemed to say that local realist theories which tried to make use of this loophole would need some other physically implausible features. As I said above in my response to #110 though, I would rather keep the issue of the detector efficiency loophole separate from your other critiques of Bell's reasoning, which would seem to apply even if we had an experiment that closed all these known loopholes (and apparently there was one experiment with perfect detector efficiency but it was vulnerable to a separate known loophole).
But of course that didn't go anywhere because you didn't respond to this, and ended up arguing that frequentist definitions of probability were so inherently horrible that you refused to adopt them even for the sake of argument, even if they were the type of probability likely being assumed by Bell in his proof.
billschnieder said:
If any one or more of the assumptions in Bell's treatment is not met in the experiments, Bell's inequalities will not apply. The locality assumption is explicit in Bell's treatment, so Bell's proponents think violation of the inequalities definitely means violation of the locality principle. But there are other hidden assumptions such as:

1) Every photon pair will be detected (due to choice of only +/- as possible outcomes)
This is an observable experimental condition (at least, it is observable whether every detection at one detector is part of a coincidence with a detection at the other; and it shouldn't be possible to come up with a local hidden variables model in which the hidden variables influence the chance of nondetection in such a way that, if one photon isn't detected, the other is guaranteed not to be detected either despite the random choice of detector settings, and which still leads to a Bell inequality violation).
billschnieder said:
2) P(lambda) is equivalent for each of the terms of the inequality
This is the no-conspiracy assumption, and given that lambda can represent local facts at a time before the experimenters make a choice of which detector setting to use (with the choice made using any random or pseudorandom method they like), it's not hard to see why a theory that violated this would have some very implausible features.
billschnieder said:
3) Datasets of pairs are extracted from a dataset of triples
As I said in my previous post:
The Bell-type inequalities are based on the theoretical assumption that on each trial there is a λ which either predetermines a definite outcome for each of the three detector settings (like the 'hidden fruits' that are assumed to be behind each box in my scratch lotto analogy), or at least predetermines a probability for each of the three which is not influenced by what happens to the other particle at the other detector (i.e. P(A|aλ) is not different from P(A|Bbaλ)). If this theoretical assumption were valid, and the probability of different values of λ on each trial did not depend on the detector settings a and b on that trial, then this would be a perfectly valid situation where these inequalities would be predicted to hold.
So, this just reduces to the assumption of local realism plus the no-conspiracy assumption; it's not an independent assumption.
billschnieder said:
4) Non-contextuality
As I argued in this post, I think you're incorrect that this is necessary for Bell's proof:
In a local realist theory there is an objective truth about which variables are associated with a given point in spacetime (and the values of those variables). This would include any variables associated with the region of spacetime occupied by the moon, and any associated with the region of spacetime occupied by a human. The variables associated with some humans might correspond to a state that we could label "observing the moon", and the variables associated with other humans might correspond to a state we could label "not observing the moon", but the variables themselves are all assumed to have an objective state that does not depend on whether anyone knows about them.

A "contextual" hidden variables theory is one where knowledge of H is not sufficient to predetermine what results the particle will give for any possible measurement of a quantum-mechanical variable like position or momentum, the conditions at the moment of measurement (like the exact state of the measuring device at the time of measurement) can also influence the outcome--see p. 39 here on google books, for example. This doesn't mean that all fundamental variables (hidden or not) associated with individual points in spacetime don't have definite values at all times, it just means that knowing all variables associated with points in the past light cone of the measurement at some time t does not uniquely determine the values of variables in the region of spacetime where the measurement is made (which tell you the outcome of the measurement).
If we assume that the particles always give the same results (or opposite results) when the same detector settings are used, then we can derive from the other assumptions already mentioned that the results for each possible setting must be predetermined (making it a non-contextual theory); I can explain if you like. But Bell derived inequalities which don't depend on this assumption of predetermined results for each setting, see p. 12 of http://cdsweb.cern.ch/record/142461/files/198009299.pdf where he writes:
It was only in the context of perfect correlation (or anticorrelation) that determinism could be inferred for the relation of observation results to pre-existing particle properties (for any indeterminism would have spoiled the correlation). Despite my insistence that the determinism was inferred rather than assumed, you might still suspect somehow that it is a preoccupation with determinism that creates the problem. Note well then that the following argument makes no mention whatever of determinism.
billschnieder said:
5) ...

And others I have not mentioned, or that are yet to be discovered. So whenever you hear about the "detection efficiency loophole", the issue really is a failure of hidden assumption (1). And the other example I gave a few posts back, about cyclicity and indexing, involves the failure of (2) and (3).
As I point out above, there aren't really that many independent theoretical assumptions, and any theoretical assumptions beyond local realism would require some very weird conditions (like parallel universes, or 'conspiracies' in past conditions that predetermine what choice the experimenter will make on each trial and tailor the earlier hidden variables to those future choices) in order to be violated.
billschnieder said:
I make an effort to explain my point of view; you are free to completely demolish it with legitimate arguments. I will continue to point out the flaws I see in your responses (as long as a relevant response can be discerned from them).
Ah, so as long as you deem it not "relevant" you are free not to address my central arguments, like not explaining what flaws you saw in my reading of the specific quotes from the Stanford Encyclopedia of Philosophy article on the principle of common cause (since your entire refutation of my past light cone argument ended up revolving around quotes from that article), or not even considering whether the probabilistic statements Bell makes might make sense when interpreted in frequentist terms with the correct understanding of the "population" of experiments (the 'population' being one defined solely in terms of observable experimental conditions, with no attempt to 'control for' the value of hidden variables, so that by the law of large numbers any real experiment matching those conditions should converge on the ideal probabilities over a large number of trials if basic theoretical assumptions like local realism were valid). Both of these were central to my counterarguments to two of your main anti-Bell arguments: the first being that Bell's equation (2) is not legitimately derivable from the assumption of local realism, the second being that it would be impossible in principle to test whether Bell's theoretical assumptions hold in the real world without knowing the value of all hidden variables in each experiment and controlling for them. But since you decided these counterarguments weren't "relevant", you simply didn't give them any substantive response.
billschnieder said:
But if you can not provide a legitimate argument and you think of the goal of discussion as one of winning/losing, you may be inclined to interpret my conviction about my point of view as "triumphant/mocking". But that is just your perspective and you are entitled to it, even if it is false.
I'll leave it to others to decide whether quotes like the following have a tone of "triumphant" dismissal or whether they simply express an attitude of caution about whether there is a slight possibility the universe obeys local realist laws that exploit both detection loopholes simultaneously:
Now that this blatant error is clear, let us look at real experiments to see which approach is more reasonable, by looking at what proportion of photons leaving the source is actually detected.

For all Bell-test experiments performed to date, only 5-30% of the photons emitted by the source have been detected, with only one exception. And this exception, which I'm sure DrC and JesseM will remind us of, had other more serious problems. Let us make sure we are clear what this means.

It means that in almost all those experiments usually thrown around as proof of non-locality, P(case4) has been at most 30%, and even as low as 5% in some cases. The question then is, where did the whopping 70% go?

Therefore it is clear, first of all by common sense, then by probability theory, and finally as confirmed by numerous experiments, that non-detection IS an issue and should have been included in the derivation of the inequalities!
(from post #930--part in bold sounds a bit 'mocking' to me, and note that the claim of 'only one exception' was posted after my post #152 on the 'Understanding Bell's Logic' thread where I told you that other experiments closing the detection loophole had been done)
Therefore correlations observed in real experiments, in which non-detection matters, cannot be compared to idealized theoretical proofs in which non-detection was not considered, since those idealized proofs made assumptions that will never be fulfilled in any real experiment.
(from post #932--a blanket dismissal of the relevance of all 'real experiments', no nuance whatsoever)
What has this got to do with anything? If there were a convincing experiment which fulfilled all the assumptions in Bell's derivation, I would change my mind. I am after the truth; I don't religiously follow one side just because I have invested my whole life in it.
(from #936--sounds rather mocking again, or was there no implication here that others like myself or DrChinese are religiously following one side because we've invested our lives in it?)
JesseM said:
Don't know about that precise inequality, but as I mentioned in an earlier post:
Did I hear ONE with a "but" attached?
(from post #151 on 'Understanding Bell's Logic'--again, sounds completely dismissive, no actual interest in what the experiment might tell us about the likelihood of a loophole-exploiting hidden variables theory)
 
  • #964
JesseM, regarding intellectual humility, don't ever doubt that I'm very thankful that there are educated people like you and DrC willing to get into the details and explain your current thinking to feeble-minded laypersons, such as myself, who are interested in and fascinated by various physics conundrums.

JesseM said:
It (the nonviability of Bell's 2) implies the falsity of local realism, which means if you are a realist who believes in an objective universe independent of our measurements, and you don't believe in any of the "weird" options like parallel worlds or "conspiracies", your only remaining option is nonlocality/ftl.
I think this is a false dichotomy which is recognized as such by mainstream physicists. Otherwise, why wouldn't all physicists familiar with Bell's work believe that nature is nonlocal (the alternative being that nature simply doesn't exist independent of our measurements)?

You've said that Bell's(2) isn't about entanglement. Then how can its falsification be telling us anything about the nature of entanglement (such as that entangled disturbances are communicating nonlocally)?

And, if it isn't a correct model of the underlying reality, which is one way of looking at it, then how can its falsification be telling us that an underlying reality doesn't exist?

As you're well aware, there are many physicists quite familiar with Bell's work who don't agree with your statement of the choices entailed by violations of BIs. If, as has been suggested, a majority of physicists think that nature is nonlocal, then why hasn't there been a paradigm shift reflecting that view? Well, I suggest, a reasonable hypothesis would be simply that a majority of physicists don't think that nature is nonlocal. (Though they might agree with the notion of quantum nonlocality, but more on that below.)

In support of that hypothesis, it's noted that Bohm's explicitly nonlocal theory has been around for 60 years. It occupies a certain niche in theoretical and foundational research. But it's certainly not accepted as the mainstream view.

I respectfully have to reject your assessment of the meaning of Bell's(2) and violations of BIs based on it, and your assessment of the mainstream view on this. My guess is that most physicists familiar enough with BIs to make an informed assessment of their physical meaning do not think that their violation implies either that nature is nonlocal or that there's no reality independent of measurements.

JesseM said:
And in technical subjects like science and math, I think it's perfectly valid to say that if some layman doesn't understand the issues very well but is confused about the justification for some statement that virtually all experts endorse, the default position of a layman showing intellectual humility should be that it's more likely the mistake lies with his/her own understanding, rather than taking it as a default that they've probably found a fatal flaw that all the experts have overlooked and proceeding to try to convince others of that.
You're saying that the "statement that virtually all experts endorse" is the dichotomy that nature is either nonlocal (taking, in keeping with the theme of this thread, the term 'nonlocality' to mean 'action-at-a-distance') or that there is no nature independent of observations. I'm saying that I think that virtually all experts would view that as a false dichotomy. This would seem to require some sort of poll. If I get time to look for one, and find it, then I'll let you know the results.

Of course, there are other views of nonlocality. I think that the term quantum nonlocality doesn't mean 'action-at-a-distance' to most physicists. It refers to a certain formalization of certain experimental situations, and the symbolic manipulations entailed by qm. In other words, quantum nonlocality has no particular physical meaning apart from the formalism and the experiments to which it's applied -- i.e., it isn't telling us anything about the existence or nature of a reality underlying instrumental behavior.

Local realism refers to the assumption that there is an objective (though unknown) reality underlying instrumental behavior, and that it's evolving in accordance with the principle of local causality. EPR's elements of reality, as defined wrt the specific experimental situation they were considering, represent a special case and subset of local realism.

There are models of entanglement which are, ostensibly, local, but not realistic, or realistic, but not local, or, both local and realistic, which reproduce the qm predictions.
 
  • #965
ThomasT said:
As you're well aware, there are many physicists quite familiar with Bell's work who don't agree with your statement of the choices entailed by violations of BIs.

You're saying that the "statement that virtually all experts endorse" is the dichotomy that nature is either nonlocal (taking, in keeping with the theme of this thread, the term 'nonlocality' to mean 'action-at-a-distance') or that there is no nature independent of observations. I'm saying that I think that virtually all experts would view that as a false dichotomy. This would seem to require some sort of poll.

JesseM stated it correctly. I don't know of a physicist in the field (other than a small group like Santos, Hess, Philipp, etc.) who does NOT agree with JesseM's assessment. Certainly you won't find any mention of dissent on this point in a textbook on the subject. I have given repeated references to roundups on the subject, including yesterday, which make this clear. In light of JesseM's statement to you, he is politely asking you to quit acting as if your minority view is more widely accepted than it is. It confuses readers like JenniT and others.

You may consider it a "false dichotomy"; but as Maaneli is fond of pointing out, you don't have to take it as a dichotomy at all! You can take it as ONE thing as a whole too: local causality is rejected. That is a complete rejection of your position regardless.

A wise person would have no issue with being a bit more humble. You can express yourself without acting like you know it all. I appreciate that after reviewing the case for Bell/Bell tests, you reject the work of thousands of physicists because of your gut feel on the matter. But that is not something to brag about.
 
  • #966
my_wan said:
JesseM said:
I define "local realism" to mean that facts about the complete physical state of any region of spacetime can be broken down into a sum of local facts about the state of individual points in spacetime in that region (like the electromagnetic field vector at each point in classical electromagnetism), and that each point can only be causally influenced by other points in its past light cone. Do you think that this is too "narrowly defined" or that EPR would have adopted a broader definition where the above wasn't necessarily true? (if so, can you provide a relevant quote from them?) Or alternatively, do you think that Bell's derivation of the Bell inequalities requires a narrower definition than the one I've just given?
Ok, that works. But I got no response on what effects the non-commutativity of vector products, even of classical vectors, has on the computational demands of modeling BI violations. If these elements are transfinite, what role might Hilbert's paradox of the Grand Hotel play in such effects?
But Bell's proof is abstract and mathematical, it doesn't depend on whether it is possible to simulate a given hidden variables theory computationally, so why does it matter what the "computational demands of modeling BI violations" are? I also don't understand your point about a transfinite set of hidden variables and Hilbert's Hotel paradox...do you think there is some specific step in the proof that depends on whether lambda stands for a finite or transfinite number of facts, or that would be called into question if we assumed it was transfinite?
my_wan said:
EPR correlations are certainly not unique in requiring relative offsets versus absolute coordinate values. SR is predicated on it. If observables are projections from a space with an entirely different metric, one which doesn't commute with a linear metric of the space we measure, that could impart computational difficulties which BI doesn't recognize.
I'm not sure what you mean by "projections from a space"... my definition of local realism above was defined in terms of points in our observable spacetime. If an event A outside the past light cone of event B can nevertheless have a causal effect on B, then the theory is not a local realist theory in our spacetime according to my definition, even if the values of variables at A and B are actually "projections" from a different unseen space where A is in the past light cone of B (is that something like what you meant?).
my_wan said:
I didn't get any response involving Maxwell's equations either.
Response to which question?
my_wan said:
Physics is contingent upon operational, not philosophical, claims. What did the original EPR paper say about it?
1) "far from exhausting all possible ways to recognize physical reality"
2) "Regarded not as a necessary, but merely as a sufficient, condition of reality"
3) "A comprehensive definition of reality is, however, unnecessary for our purposes"

In other words they chose a definition that had operational value in making the point more concise, not a definition which defined reality itself.
They did make the claim that there should in certain circumstances be multiple elements of reality corresponding to different possible measurements even when it is not operationally possible to measure them all simultaneously, didn't they?
my_wan said:
True, technical definitions confuse the uninitiated quite often. Locality is one of them, being predicated on relativity. Thus it's in principle possible to violate locality without violating realism, as the stated EPR consequences recognize.
Sure, Bohmian mechanics would usually be taken as an example of this.
my_wan said:
Yet if "realism" is technically predicated on the operational definition provided by EPR, why reject out of hand definitions counter to that EPR provided as a source of research?
I don't follow, what "definitions counter to that EPR provided" are being rejected out of hand?
JesseM said:
Not sure what you mean. Certainly there's no need to assume, for example, that when you measure different particle's "spins" by seeing which way they are deflected in a Stern-Gerlach device, you are simply measuring a pre-existing property which each particle has before measurement (so each particle was already either spin-up or spin-down on the axis you measure).
my_wan said:
Unless observables are a linear projection from a space which has a non-linear mapping to our measured space of variables, to name just one.
What's the statement of mine you're saying "unless" to? I said "there's no need to assume ... you are simply measuring a pre-existing property which each particle has before measurement", not that this was an assumption I made. Did you misunderstand the structure of that sentence, or are you actually saying that if "observables are a linear projection from a space which has a non-linear mapping to our measured space of variables", then my statement is wrong and there is a need to assume we are measuring pre-existing properties the particle has before measurement?
JesseM said:
Don't know what you mean by that either. Any local physical fact can be defined in a way that doesn't depend on a choice of coordinate system, no?
my_wan said:
Yes, assuming the variables required are algorithmically compressible or finite.
Why would infinite or non-compressible physical facts be exceptions to that? Note that when I said "can be defined" I just meant that a coordinate-independent description would be theoretically possible, not that this description would involve a finite set of characters that could be written down in practice by a human. For example, there might be some local variable that could take any real number between 0 and 1 as a value, all I meant was that the value (known by God, say) wouldn't depend on a choice of coordinate system.
my_wan said:
I never got this answer either: what does it mean when you can model a rotation of the beam in an EPR model and maintain BI violations while individual photon paths vary, yet the apparently physical equivalent of uniformly rotating the pair of detectors destroys it? Are they not physically equivalent transforms?
As you rotate the direction of the beams, are you also rotating the positions of the detectors so that they always lie in the path of the beams and have the same relative angle between their orientation and the beam? If so, this doesn't really seem physically equivalent to rotating the detectors alone, since in that case the relative angle between the detector orientation and the beam would change.
my_wan said:
If that is your definition of local realism, fine. But you can't make claims that your definition precludes alternatives, or that alternatives are precluded by your definition. You want a broader "technical" definition of realism? It's not mine, it came from exactly the same source as yours did. "An element of reality that exist independent of any measurement". That's it.
But that's just realism, it doesn't cover locality (Bohmian mechanics would match that notion of realism for example). I think adding locality forces you to conclude that each basic element of reality is associated with a single point in spacetime, and is causally affected only by things in its own past light cone.
 
  • #967
JesseM said:
It (the nonviability of Bell's 2) implies the falsity of local realism, which means if you are a realist who believes in an objective universe independent of our measurements, and you don't believe in any of the "weird" options like parallel worlds or "conspiracies", your only remaining option is nonlocality/ftl.
ThomasT said:
I think this is a false dichotomy which is recognized as such by mainstream physicists. Otherwise, why wouldn't all physicists familiar with Bell's work believe that nature is nonlocal (the alternative being that nature simply doesn't exist independent of our measurements)?
Many physicists have a basically positivist attitude and don't think it's worth talking about questions that aren't experimentally testable (which by definition includes any questions about what's going on with quantum systems when we aren't measuring them). As I noted though, even if you do want to take a "realist" attitude towards QM, there are a few other "weird" options which allow you to avoid FTL, like the many-worlds interpretation (which is actually very popular among physicists who have opinions about the 'interpretation' of QM), or possibly some form of backwards causality which allows for violations of the no-conspiracy assumption (because the later choice of detector settings can have a backwards influence on the probability the source emits particles with different values of hidden variables). So most realist physicists would probably consider it an open question whether nature takes one of these other "weird" options as opposed to the "weird" option of FTL/nonlocal influences between particles. Either way, I think virtually every mainstream physicist would agree the non-"weird" option of local realism is incompatible with QM theoretically, and can be pretty safely ruled out based on experiments done so far even if none has been completely perfect.
ThomasT said:
You've said that Bell's(2) isn't about entanglement. Then how can its falsification be telling us anything about the nature of entanglement (such as that entangled disturbances are communicating nonlocally)?
Because it's about constraints on the statistics in experiments which meet certain experimental conditions, given the theoretical assumption of local realism--since QM's predictions about entanglement say that these statistical constraints will be violated in experiments meeting those same specified experimental conditions, that shows that QM and local realism are incompatible with one another.
ThomasT said:
And, if it isn't a correct model of the underlying reality, which is one way of looking at it, then how can its falsification be telling us that an underlying reality doesn't exist?
Because it's a general model of any possible theory that would qualify as "local realist" as physicists understand the term, which I take as basically equivalent to the definition I gave my_wan:
I define "local realism" to mean that facts about the complete physical state of any region of spacetime can be broken down into a sum of local facts about the state of individual points in spacetime in that region (like the electromagnetic field vector at each point in classical electromagnetism), and that each point can only be causally influenced by other points in its past light cone.
So, a falsification of the predictions of this general model constitutes a falsification of "local realism" as in my definition above.
ThomasT said:
As you're well aware, there are many physicists quite familiar with Bell's work who don't agree with your statement of the choices entailed by violations of BIs.
"Many" who aren't regarded as crackpots by the mainstream community? (i.e. not someone like Kracklauer who would fit 't Hooft's description of a bad theoretical physicist very well) If so, can you give some examples? DrChinese, who's a lot more familiar with the literature on this subject than I, said:
I don't know of a physicist in the field (other than a small group like Santos, Hess, Philipp, etc.) that does NOT agree with JesseM's assessment. Certainly you won't find any mention of dissent on this point in a textbook on the subject. I have given repeated references to roundups on the subject, including yesterday, which makes this clear.
ThomasT said:
If, as has been suggested, a majority of physicists think that nature is nonlocal
I don't necessarily think a majority would endorse that positive conclusion, for the reasons I gave above. But virtually everyone would agree local realism can be ruled out, aside from a few "weird" variants like the ones I mentioned involving violations of various conditions that appear in rigorous versions of Bell's argument (like the no-conspiracy condition).
ThomasT said:
I respectfully have to reject your assessment of the meaning of Bell's(2)
You "reject" it without being willing to engage with my specific arguments as to why it's implied by local realism, like the one about past light cones in post #63 here which I've directed you to a few times, and also without being willing to answer my detailed questions about your claims to have an alternative model involving polarization vectors. This doesn't sound like the attitude of an open-minded inquirer into truth, but rather someone with an axe to grind against Bell based on gut feelings that there must be some flaw in the argument even if you can't quite pinpoint what it is.
ThomasT said:
and violations of BIs based on it, and your assessment of the mainstream view on this. My guess is that most physicists familiar enough with BIs to make an informed assessment of their physical meaning do not think that their violation implies either that nature is nonlocal or that there's no reality independent of measurements.
As I said, there are other options besides "nature is nonlocal" or "no reality independent of measurements", including both the popular many-worlds interpretation and the even more popular positivist attitude of not caring about any questions that don't concern measurements (or at least not thinking them subjects for science).
ThomasT said:
You're saying that the "statement that virtually all experts endorse" is the dichotomy that nature is either nonlocal (taking, in keeping with the theme of this thread, the term 'nonlocality' to mean 'action-at-a-distance') or that there is no nature independent of observations.
No, I had already mentioned other "weird" options like parallel universes or violations of the no-conspiracy condition in previous posts to you.
ThomasT said:
Local realism refers to the assumption that there is an objective (though unknown) reality underlying instrumental behavior, and that it's evolving in accordance with the principle of local causality. EPR's elements of reality, as defined wrt the specific experimental situation they were considering, represent a special case and subset of local realism.
Your definition of "local realism" seems to match the one I gave to my_wan, and Bell's proof is broad enough to cover all possible theories that would be local realist in this sense.
ThomasT said:
There are models of entanglement which are, ostensibly, local, but not realistic, or realistic, but not local, or, both local and realistic, which reproduce the qm predictions.
There are no "models of entanglement which are ... both local and realistic, which reproduce the qm predictions", at least not ones which match the other conditions in Bell's proof like each measurement having a unique outcome (no parallel universes) and no "conspiracies" creating correlations between random choice of detector settings and prior values of hidden variables (and again, his equation (2) is not an independent condition, it follows logically from the other conditions). If you disagree, please point to one!
 
  • #968
JesseM said:
Are you saying that Leggett and Garg themselves claimed that their inequality should apply to situations where the three values a,b,c don't represent times of measurement, including the scenario with doctors collecting data on patients from different countries?
Your changing argument against this counter-example has been mostly dismissive.

First you tried to suggest that the inequality I provided was not the same as the one of Leggett and Garg, when a simple check of the original LG article should have revealed that it is right there. Then you tried suggesting that the inequality does not apply to the counter-example I presented, pointing to an appendix of an unpublished thesis (and we are not even sure if the guy passed) as evidence to support your claim.

All along, you make no effort to actually understand what I am telling you. And this is the pattern with your responses. As soon as you see a word in an opposing post, you immediately think you know what the point is and you reproduce your pre-canned recipes of counter-arguments without making an effort to understand the specific opposing argument being made. And your recent diatribe about a previous discussion on PCC shows the same, combined with selective memory of those discussions which are in the open for anyone to read. The following analogy summarizes your approach.

Person1: " 1 apple + 1 orange is not equivalent to 2 pears"
JesseM: "1 + 1 = 2, I can prove it ... <insert 5 pages of extensive text and proofs> ... Do you disagree?"
Person1: "Your response is irrelevant to the issue"
JesseM: "Are you going to answer my question or not?
Person1: <ignores JesseM>
JesseM: <50 posts and 10 threads later> "The fact that you refused to respond to my question in post <##> shows that you are only interested in rhetoric"

Now back to the subject of the LGI: I have repeatedly told you it doesn't matter what a, b, c are; any inequalities of that mathematical form will be violated if the data being compared to the inequalities are not correctly indexed to maintain the cyclicity. I have very clearly explained this numerous times. Don't you realize it is irrelevant to my argument to then try to prove to me that Leggett and Garg used ONLY time to derive their inequalities? Just because LG used time to arrive at their inequalities does not mean correctly indexing the data is not required. I have given you a reference to an article by Boole from more than a century ago, in which he derived similar inequalities using just boolean algebra without any regard to time, yet you complain that the article is too long and that you don't like Boole's notation. The language may be dated but the notation is quite clear, if you actually read the text to find out what the symbols mean. Well, here is a simplified derivation using familiar symbols, so that there can be no escape from the fact that such inequalities can be derived on a purely mathematical basis:

Define a boolean variable v taking the values 0 or 1, and let [tex]\overline{v} = 1 - v[/tex].
Now consider three such boolean variables x, y, z which can occur together in any experiment

It therefore follows that:
[tex]1 = \overline{xyz}+x\overline{yz}+x\overline{y}z+\overline{x}y\overline{z}+xy\overline{z}+\overline{xy}z+\overline{x}yz + xyz[/tex]

We can then group the terms as follows so that each group in parentheses can be reduced to products of only two variables.

[tex]1 = \overline{xyz}+(x\overline{yz}+x\overline{y}z)+(\overline{x}y\overline{z}+xy\overline{z})+(\overline{xy}z+\overline{x}yz) + xyz[/tex]

Performing the reduction, we obtain:
[tex]1 = \overline{xyz}+(x\overline{y})+(y\overline{z})+(\overline{x}z) + xyz[/tex]

Which can be rearranged as:
[tex]x\overline{y}+y\overline{z}+\overline{x}z = 1 - (\overline{xyz} + xyz)[/tex]

But since the last two terms on the RHS are either 0 or 1, you can write the following inequality:
[tex]x\overline{y}+y\overline{z}+\overline{x}z \leq 1[/tex]

This is Boole's inequality and you can find similar ones on pages 230 and 231 of Boole's article.
In Bell-type situations we are interested not in boolean variables with possible values (0, 1) but in variables with values (+1, -1), so we can define three such variables a, b, c where a = 2x - 1, b = 2y - 1 and c = 2z - 1.

Remembering that [tex]\overline{x} = 1 - x[/tex], and substituting into the above inequality while keeping on the LHS only terms involving products of pairs, you obtain the following inequality:

[tex]-ab - ac - bc \leq 1[/tex]
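(Filling in the substitution step, for anyone who wants the intermediate algebra: with [tex]x = (1+a)/2[/tex], [tex]\overline{x} = (1-a)/2[/tex], and likewise for y and z,

[tex]x\overline{y}+y\overline{z}+\overline{x}z = \frac{(1+a)(1-b)}{4}+\frac{(1+b)(1-c)}{4}+\frac{(1-a)(1+c)}{4} = \frac{3-ab-bc-ac}{4}[/tex]

since the linear terms in a, b, c cancel; the condition that this be at most 1 then rearranges directly to the inequality above.)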

from which you can obtain the following inequality by replacing a with -a.

[tex]ab + ac - bc \leq 1[/tex]

These two inequalities can be combined into the form

[tex]|ab + ac| \leq 1 + bc[/tex]

Which is essentially Bell's inequality. If you doubt this result, you can do the math yourself and confirm that it is valid. Note that we have derived this simply by assuming that we have three dichotomous variables occurring together, from which we extract products of pairs, using simple algebra without any assumptions about time, locality, non-invasiveness, past light-cones or even hidden variables. Therefore their violation by data does not mean anything other than a mathematical problem with the way the data is treated. The counter-example I presented shows this very clearly; that is why, when you keep focusing on "time" or "non-invasiveness", thinking that it addresses the issue, I do not take you seriously. So try to understand the opposing argument before you attempt to counter it.
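As a quick sanity check on the algebra (a minimal sketch; the variable names simply mirror the derivation), the inequalities can be verified by brute force over all eight assignments of the dichotomous variables:

[code]
from itertools import product

# Enumerate all eight assignments of the dichotomous variables a, b, c.
for a, b, c in product((-1, 1), repeat=3):
    assert -a*b - a*c - b*c <= 1          # first derived inequality
    assert a*b + a*c - b*c <= 1           # after replacing a with -a
    assert abs(a*b + a*c) <= 1 + b*c      # combined, Bell-type form
print("All 8 assignments satisfy the inequalities.")
[/code]

Since each inequality holds assignment by assignment, it also holds for averages over any single dataset of complete triples, whatever their distribution.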
 
  • #969
JesseM said:
A violation of the inequalities by data which doesn't match the conditions Bell and Leggett-Garg and Boole assumed when deriving them doesn't indicate a flaw in reasoning which says the inequalities should hold if the conditions are met.

Here again you are arguing that 1 + 1 = 2, completely ignoring the point, which, simply stated, is this:
"Violation of the inequalities derived by using a series of assumptions (Ai, i=1,2,3,...,n) by data, means ONLY that one or more of the assumptions (Ai, i=1,2,3,...,n) is false!"
If A1 = "Locality", and you conclude that violation of the inequality implies non-locality, you are being intellectually dishonest, because you know very well that failure of any of the other assumptions can lead to violations even if the locality assumption is true. This is the whole point of the discussion! Again, if you were actually trying to understand my argument, you would have realized this a while ago.
If you insist that the inequalities were derived precisely to describe the Aspect-type experimental situation, as you have sometimes claimed previously, then I will argue that the inequalities are flawed, because, for the numerous reasons presented here and well recognized in the mainstream, no single experiment has yet satisfied all the assumptions inherent in their derivation. However, if you insist that the inequalities only apply to some ideal experiments which fulfill those assumptions, as I have mentioned many times previously and as I doubt anyone here believes otherwise, then those idealized inequalities, however perfect they are, cannot be compared to real experiments unless there is independent justification of a correspondence between the data from these experiments and the terms within the inequalities. So in case you want to continue to provide proof that 1 + 1 = 2, read this paragraph again and make sure you understand the point.

... from an objective point of view (the point of view of an all-knowing omniscient being)
Again, you are trying to argue that 1 + 1 = 2. How many times must I tell you that experiments are not performed by omniscient observers before it will sink in? You can imagine all you want about an omniscient being, but your imagination will not be comparable to a real experiment by real experimenters.

this isn't a case where each sample is taken from a "data point" consisting of a triplet of objective (hidden) facts about a,b,c, such that the probability distribution on triplets for a sample pair AaAb is the same as the probability distribution on triplets for the other two sample pairs AaAc and AbAc. In the frequentist understanding of probability, this means that in the limit as the number of sample pairs goes to infinity, the frequency at which any given triplet (or any given ordered pair of triplets if the two members of the sample pair are taken from different triplets) is associated with samples of type AaAb should be the same as the frequency at which the same triplet is associated with samples of type AaAc and AbAc. If the "noninvasive measurability" criterion is met in a Leggett-Garg test, this should be true of the measurements at different pairs of times of SQUIDs if local realism is true. Likewise, if the no-conspiracy condition is true in a test of the form Bell discussed in his original paper, this should also be true if local realism is true.
Are you making a point with this? You just seem to be rehashing here exactly what is already mentioned in the paper: the fact that, to a non-omniscient being without knowledge of all the factors in play, A1a is not different from Aa, which is precisely why the inequality is violated. So it is unclear what your point is.

4) You do not deny the fact that in the example, there is no way to ensure the data is correctly indexed unless all relevant parameters are known by the experimenters

I would deny that, at least in the limit as the number of data points becomes very large. In this case they could just pool all their data, and use a random process (like a coin flip) to decide whether each Aa should be put in a pair with an Ab data point or an Ac data point, and similarly for the other two.
This is why I asked you to read the paper in full, because you do not know what you are talking about here. The experimenters did not suspect that the location of the test was an important factor, so their data was not indexed for location. That means they do not have any data point such as A1a(n). All they have is Aa(n). So I'm not sure what you mean by the underlined text. Also note that they are calculating averages over all their data, so I'm not sure why you would think randomly selecting them will make a difference.

Imagine having a bit-mapped image from which you want to extract pixels randomly. For each pixel you record a triple of properties (x position, y position, and color). From the final dataset of triples, you can reconstruct the image. Now suppose that instead of collecting one dataset of triples, you collect two datasets of pairs, (x, y) and (y, color). What you are suggesting here is similar to the idea that you can still generate the image by randomly deciding which pair from the first dataset should be matched with which pair from the second dataset!

5) You do not deny that Bell's inequalities involve pairs from a set of triples (a,b,c) and yet experiments involve triples from a set of pairs.
I certainly deny this too; in fact I don't know what you can be talking about here.

In Bell's treatment the terms a, b, c represent a triple of angles for which it is assumed that a specific particle will have values for specific hidden elements of reality. The general idea, which DrC and you have mentioned several times, usually goes like this: "the particle has a specific polarization/spin for those different settings which exists before any measurement is made", and you have often called this "the realism assumption". So according to Bell, for each pair of particles under consideration, at least in the context of Bell's inequalities, there are three properties corresponding to (a,b,c). From these, Bell derives the inequality of the form
1 + E(b,c) >= |E(a,b) - E(a,c)|
Clearly, each term in the inequality involves a pair extracted from the triple (a,b,c). You could say the inequality involves a triple of pairs extracted from an ideal dataset of triples. In an actual experiment, we have ONLY two stations, so we can only have two settings at a time. Experimenters therefore collect a dataset which involves just pairs of settings. So, to generate terms for the above inequalities from the data, the triple of pairs has to be extracted from a dataset of pairs. Bell proponents think it is legitimate to substitute pairs extracted from a dataset of pairs for pairs extracted from a dataset of triples. (Compare with the image analogy above.)

1) You do not deny that it is impossible to measure triples in any EPR-type experiment, therefore Bell-type inequalities do not apply to those experiments.
This one is so obviously silly you really should know better. The Bell-type inequalities are based on the theoretical assumption that on each trial there is a λ which either predetermines a definite outcome for each of the three detector settings (like the 'hidden fruits' that are assumed to be behind each box in my scratch lotto analogy) ...
Another example of answering without understanding the point you are arguing against. First, I have already pointed out to you that you cannot compare an idealized theoretical construct with an actual experiment unless you can demonstrate that the terms in your idealized theoretical construct correspond to elements in the experiment. Second, I have explained why the fact that Aspect-type experiments only produce pairs of data points is a problem for anyone trying to compare those experiments with Bell inequalities. So, rather than throwing insults, if you know of an experiment in which a specific pair of entangled particles is measured at three different angles (a,b,c), then point it out.

I don't know what you mean by "Rij".
Try to derive the inequalities I derived above using three variables of which only two can occur together in any experiment. It cannot be done. This demonstrates conclusively that you cannot substitute a triplet of pairs extracted from a dataset of pairs into an inequality involving a triplet of pairs extracted from a dataset of triples. (See the image analogy above.)
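The purely mathematical point at issue in this exchange can be illustrated numerically (a sketch only: the uniform triple distribution and the angles are illustrative assumptions, and nothing in the snippet settles the physical question of which description applies to real experiments):

[code]
import numpy as np

rng = np.random.default_rng(1)

# Case 1: every trial carries a complete triple (a, b, c), and all three
# pairwise correlations are computed from the SAME dataset of triples.
t = rng.choice([-1, 1], size=(200_000, 3))
a, b, c = t[:, 0], t[:, 1], t[:, 2]
lhs = abs(np.mean(a * b) + np.mean(a * c))
rhs = 1 + np.mean(b * c)
print(lhs <= rhs)  # True for ANY distribution of triples, because the
                   # inequality holds trial by trial before averaging.

# Case 2: the three correlations come from three separate pair
# experiments, with nothing tying them to one distribution of triples.
# Plugging in the singlet-state predictions E(x, y) = -cos(x - y) for
# angles a = 0, b = 45 and c = 90 degrees:
E_ab, E_ac, E_bc = -np.cos(np.pi / 4), -np.cos(np.pi / 2), -np.cos(np.pi / 4)
print(abs(E_ab + E_ac), 1 + E_bc)  # 0.707... vs 0.292...: pair data like
                                   # this is not bound by the triple form.
[/code]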
 
  • #970
billschnieder said:
In Bell's treatment the terms a, b, c represent a triple of angles for which it is assumed that a specific particle will have values for specific hidden elements of reality. The general idea, which DrC and you have mentioned several times, usually goes like this: "the particle has a specific polarization/spin for those different settings which exists before any measurement is made", and you have often called this "the realism assumption". So according to Bell, for each pair of particles under consideration, at least in the context of Bell's inequalities, there are three properties corresponding to (a,b,c). From these, Bell derives the inequality of the form

1 + E(b,c) >= |E(a,b) - E(a,c)|

Clearly, each term in the inequality involves a pair extracted from the triple (a,b,c). You could say the inequality involves a triple of pairs extracted from an ideal dataset of triples. In an actual experiment, we have ONLY two stations, so we can only have two settings at a time. Experimenters therefore collect a dataset which involves just pairs of settings. So, to generate terms for the above inequalities from the data, the triple of pairs has to be extracted from a dataset of pairs. Bell proponents think it is legitimate to substitute pairs extracted from a dataset of pairs for pairs extracted from a dataset of triples. (Compare with the image analogy above.)

Yes, I think that is a fair assessment of some of the key ideas of Bell. I think it is well understood that there are some sampling issues, but that for the most part they change little. Again, I realize you think sampling is a big "loophole", but few others do.

The fact that doesn't change, no matter how you cut it, is the one item I keep bringing up: It is not possible to derive a dataset for ONE sample of particles that provides consistency with QM statistics. In other words, forget entangled pairs... that is merely a device to test the underlying core issue. Once you accept that no such dataset is possible, which I know you do, then really the entire local realistic house of cards comes down. I know you don't accept that conclusion, but that is it for most everyone else.
 
  • #971
DrChinese said:
The fact that doesn't change, no matter how you cut it, is the one item I keep bringing up: It is not possible to derive a dataset for ONE sample of particles that provides consistency with QM statistics. In other words, forget entangled pairs... that is merely a device to test the underlying core issue. Once you accept that no such dataset is possible, which I know you do, then really the entire local realistic house of cards comes down. I know you don't accept that conclusion, but that is it for most everyone else.

The QM statistics predict precisely the outcome of those experiments, the experiments agree with QM, and so the data from those experiments is already a dataset which agrees with QM; what more do you want? You will have to define precisely the experiment you want us to produce a dataset for, and also provide the QM prediction for the specific experiment you describe. Asking us to produce a dataset from one type of experiment (which can never actually be performed) that matches the predictions QM gives for another type of experiment is not a serious request.
 
  • #972
DrChinese said:
They are often used differently in different contexts. The key is to ask: what pairs am I attempting to collect? Did I collect all of those pairs? Once I collect them, was I able to deliver them to the beam splitter? Of those photons going through the beam splitter, what % were detected? By analyzing carefully, the experimenter can often evaluate these questions. In state of the art Bell tests, these can be important - but not always. Each test is a little different. For example, if fair sampling is assumed then strict evaluation of visibility may not be important. But if you are testing the fair sampling assumption as part of the experiment, it would be an important factor.
Wrong. You are confusing visibility with detection efficiency.
Visibility is, roughly speaking, a signal-to-noise ratio. If the visibility is too low then you don't violate Bell inequalities (or CHSH) even assuming fair sampling.
So visibility is always important.
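For reference, the standard definition in play here (as commonly used in the literature; not specific to any one paper): if [tex]C_{max}[/tex] and [tex]C_{min}[/tex] are the maximum and minimum coincidence rates as one analyzer is rotated, the (correlation) visibility is

[tex]V = \frac{C_{max} - C_{min}}{C_{max} + C_{min}}[/tex]

If the noise is unpolarized, the measured CHSH value scales roughly as [tex]S = 2\sqrt{2}\,V[/tex], so even granting fair sampling a violation requires [tex]V > 1/\sqrt{2} \approx 71\%[/tex]; this is why visibility matters independently of detection efficiency.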

DrChinese said:
Clearly, the % of cases where there is a blip at Alice's station but not Bob's (and vice versa) is a critical piece of information where fair sampling is concerned. If you subtract that from 100%, you get a number. I believe this is what is referred to as visibility by Zeilinger but honestly it is not always clear to me from the literature. Sometimes this may be called detection efficiency. At any rate, there are several distinct issues involved.
You might confuse (correlation) visibility with detection efficiency, but there is absolutely no reason to assume that the authors of the paper share that confusion.

DrChinese said:
Keep in mind that for PDC pairs, the geometric angle of the collection equipment is critical. Ideally, you want to get as many entangled pairs as possible and as few unentangled ones as possible. If the alignment is not correct, you will miss entangled pairs. You may even mix in some unentangled pairs (which will reduce your results from the theoretical maximum violation of a BI). There is something of a border at which collecting more entangled pairs is offset by collecting too many more unentangled ones. So it is a balancing act.
This concerns visibility. But to have a high coincidence rate we need high coupling efficiency, and for that we should look at coupled photons versus uncoupled (single) photons (as opposed to entangled versus unentangled pairs).
If we observe a high coincidence rate then we certainly have both high detection efficiency and high coupling efficiency. But of course we can have high detection efficiency yet low coupling efficiency because of a poor configuration of the source, in which case the high detection efficiency is of no use, because the coincidence rate will be low anyway.
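A back-of-the-envelope sketch of zonde's point (all numbers purely illustrative, and treating the two arms and the two efficiencies as independent is a simplification):

[code]
def coincidence_fraction(coupling_eff, detector_eff):
    # A pair yields a coincidence only if BOTH photons are coupled into
    # their collection optics AND detected, so each arm contributes one
    # factor of (coupling_eff * detector_eff).
    return (coupling_eff * detector_eff) ** 2

print(coincidence_fraction(0.35, 0.90))  # ~0.10: excellent detectors but
                                         # poor coupling still gives ~10%
print(coincidence_fraction(0.90, 0.90))  # ~0.66: both factors must be high
[/code]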
 
  • #973
JesseM said:
See for example this paper and this one...the discussion seems fairly specific.
Well, you see, the problem here is that the authors of these papers assume that detector efficiency is the only obstacle to eliminating the detection loophole.
But if you look at actual experiments the picture seems a bit different. There does not seem to be any improvement in the coincidence detection rate for the full setup when you use detectors with high efficiency. The coincidence detection rate is still around 10% for experiments with high coincidence visibility.
The crucial part here is another piece of equipment used in these experiments: there are frequency interference filters between the PBS and the detectors. If you remove them you increase the coincidence detection rate, but you reduce the visibility for measurements in the +45°/-45° basis.
And there are no suggestions for how you could get rid of them (or move them elsewhere) while preserving high visibility.

So there does not seem to be a clear road toward loophole-free experiments.
And my position is that there won't be such experiments in a year, or ten years, or ever.
 
  • #974
zonde said:
1. Well, you see, the problem here is that the authors of these papers assume that detector efficiency is the only obstacle to eliminating the detection loophole.
But if you look at actual experiments the picture seems a bit different. There does not seem to be any improvement in the coincidence detection rate for the full setup when you use detectors with high efficiency. The coincidence detection rate is still around 10% for experiments with high coincidence visibility.

2. So there does not seem to be a clear road toward loophole-free experiments.
And my position is that there won't be such experiments in a year, or in ten years, or ever.

1. I don't necessarily doubt the 10% figure; I just can't locate a reference that clearly states it. And I have looked. The number I am trying to find is:

(Alice detections with a matching Bob detection) / (total Alice detections)
(and the same for Bob)

To me, that leads to what I think of as visibility. That is probably not the right label, but I have had a difficult time getting a clear picture of how this quantity is calculated and presented.
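For what it's worth, a minimal sketch of computing exactly that ratio from raw detection times (the timestamps and the 1 ns coincidence window are made up for illustration):

```python
def matched_fraction(times_a, times_b, window=1e-9):
    """Fraction of detections in times_a that have a detection in
    times_b within the coincidence window. Call with the lists
    swapped to get the same ratio for the other side.
    Both lists are in seconds and assumed sorted."""
    matched = 0
    j = 0
    for t in times_a:
        # advance the pointer past events that are too early to match
        while j < len(times_b) and times_b[j] < t - window:
            j += 1
        if j < len(times_b) and abs(times_b[j] - t) <= window:
            matched += 1
    return matched / len(times_a)

# Toy data: Alice clicks 5 times, Bob catches only 2 of the partners
alice = [1.0e-6, 2.0e-6, 3.0e-6, 4.0e-6, 5.0e-6]
bob = [2.0e-6 + 2e-10, 5.0e-6 - 3e-10]
print(f"Alice with matching Bob / total Alice = {matched_fraction(alice, bob):.0%}")
```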


2. Although so-called "loophole-free" experiments are scientifically desirable, their absence means very little. You are welcome to wait for them; for virtually everyone else, the existing evidence is overwhelming. Local realism has failed every single test devised to date (when compared to QM). And that is quite a few.
 
  • #975
zonde said:
But if you look at actual experiments, the picture seems a bit different. There does not seem to be any improvement in the coincidence detection rate for the full setup when you use high-efficiency detectors. The coincidence detection rate is still around 10% in experiments with high coincidence visibility.
This may be true if you're talking about experiments with pairs of entangled photons, but other types of entanglement experiments have been performed where the detection efficiency was close to 100%, although these experiments are vulnerable to the locality loophole. See here and here for example. If you look at the papers proposing loophole-free experiments that I gave you links to earlier, the proposals are also ones that don't involve photon pairs but rather other types of entangled systems.
 
  • #976
DrChinese said:
Certainly you won't find any mention of dissent on this point in a textbook on the subject.
The textbook that I learned qm from didn't say anything about nature being nonlocal.

DrChinese said:
In light of JesseM's statement to you, he is politely asking you to quit acting as if your minority view is more widely accepted than it is.
My view is that Bell doesn't require me to assume that nature is nonlocal, which JesseM seems to indicate might well be the majority view:

JesseM said:
I don't necessarily think a majority would endorse that positive conclusion (that nature is nonlocal).

DrChinese said:
It confuses readers like JenniT and others.
I don't think that my simplistic, feeble-minded observations, questions, or assertions (note the intellectual humility) could possibly confuse anyone -- and certainly not JenniT. Your stuff, on the other hand, is either very deep or very confused. Either way, I still have the utmost respect for your, JesseM's, and for that matter anyone else's attempts to enlighten me wrt Bell-related stuff. If my 'know it all' style is sometimes annoying, then at least that part of my program is successful. Just kidding. Try to block that out and focus only on what I'm saying, or on what you think I'm trying to say. The bottom line is that I really don't feel that I fully understand it. Am I alone in this? I don't think so. Anyway, we have these wonderful few threads here at PF actively dealing with Bell's stuff, and for the moment I'm in a sort of philosophy/physics Hillbilly Heaven of considerations of Bell's theorem. Not that the stuff in these threads is necessarily all that profound -- and not that I would know anyway (more intellectual humility) -- but it's motivating me, and I'll bet others too, to research this in ways we probably wouldn't take the time to do otherwise.

DrChinese said:
You may consider it a "false dichotomy"; but as Maaneli is fond of pointing out, you don't have to take it as a dichotomy at all! You can take it as ONE thing as a whole too: local causality is rejected. That is a complete rejection of your position regardless.
Ok, it's not a dichotomy. Then the nonlocality of nature is the inescapable conclusion following Bell. So why isn't this the general paradigm of physics? Why isn't it taught in physics classes? Why, as JesseM thinks (and as I would agree), don't a majority of physicists endorse the conclusion that nature is nonlocal? And why bother with any 'mediating' physics at all if Bell has shown it to be impossible?

DrChinese said:
A wise person would have no issue with being a bit more humble.
But I am humble. See above. And wise. See below.

DrChinese said:
You can express yourself without acting like you know it all. I appreciate that after reviewing the case for Bell/Bell tests, you reject the work of thousands of physicists because of your gut feel on the matter. But that is not something to brag about.
I have the gut feeling that you might be exaggerating. Am I wrong? (Would 'hundreds of physicists' be a closer estimate? Or, maybe, 87?)

By the way DrC (and others), I'm going to be out blowing stuff up with various explosives and lighting things on fire with various lenses in commemoration of our independence or whatever. Plus lots of hotdogs with jalapenos, cheese and mustard -- and beer! HAPPY 4TH OF JULY!
 
  • #977
JesseM said:
I define "local realism" to mean that facts about the complete physical state of any region of spacetime can be broken down into a sum of local facts about the state of individual points in spacetime in that region (like the electromagnetic field vector at each point in classical electromagnetism), and that each point can only be causally influenced by other points in its past light cone.

ThomasT said:
Local realism refers to the assumption that there is an objective (though unknown) reality underlying instrumental behavior, and that it's evolving in accordance with the principle of local causality. EPR's elements of reality, as defined wrt the specific experimental situation they were considering, represent a special case and subset of local realism.

JesseM said:
Your definition of "local realism" seems to match the one I gave to my_wan, and Bell's proof is broad enough to cover all possible theories that would be local realist in this sense.
That's the question: is Bell's proof broad enough to cover all possible LR theories?

Certainly I agree with you, and understand why: Bell's theorem, as developed by Bell, disallows any and all LHV or LR theories that conform to Bell's explicit formulation of such theories. That is, models conforming to Bell's explicit requirements must necessarily be incompatible with qm (and, as has been demonstrated, with experiments). The ONLY question about this, afaik, concerns the generality of Bell's LHV or LR model. In connection with this consideration, LR models of entanglement have been proposed which do reproduce the qm predictions.

JesseM said:
There are no "models of entanglement which are ... both local and realistic, which reproduce the qm predictions", at least not ones which match the other conditions in Bell's proof like each measurement having a unique outcome (no parallel universes) and no "conspiracies" creating correlations between random choice of detector settings and prior values of hidden variables (and again, his equation (2) is not an independent condition, it follows logically from the other conditions). If you disagree, please point to one!
Ok. Here's one, posted in another (Bell) thread by Qubix, which I've taken some time to try to understand. I think it's conceptually equivalent to what I've been saying about the joint experimental context measuring something different from what the individual experimental contexts measure.

Disproofs of Bell, GHZ, and Hardy Type Theorems and the Illusion of Entanglement
http://uk.arxiv.org/abs/0904.4259

No one has responded to it (in the thread "Bell's mathematical error") except DrC:

DrChinese said:
Christian's work has been rejected. But that is not likely to stop him. He fails test #1 with me: his model is not realistic.
We're waiting for DrC to clarify his 'realism' requirement -- truly a puzzlement in its own right.

Yes, Christian's work (on this) has been 'rejected'. However, the supposed rebuttals have themselves been rebutted. As it stands now there has been, afaik, little or no interest in Christian's work on Bell's theorem for about 3 years -- much like Bell's first (famous) paper in the 3 years following its publication.

The abstract:
An elementary topological error in Bell's representation of the EPR elements of reality is identified. Once recognized, it leads to a topologically correct local-realistic framework that provides exact, deterministic, and local underpinning of at least the Bell, GHZ-3, GHZ-4, and Hardy states. The correlations exhibited by these states are shown to be exactly the classical correlations among the points of a 3 or 7-sphere, both of which are closed under multiplication, and hence preserve the locality condition of Bell. The alleged non-localities of these states are thus shown to result from misidentified topologies of the EPR elements of reality. When topologies are correctly identified, local-realistic completion of any arbitrary entangled state is always guaranteed in our framework. This vindicates EPR, and entails that quantum entanglement is best understood as an illusion.

And an excerpt from the Introduction:
Hence Bell’s postulate of equation (1) amounts to an implicit assumption of a specific topology for the EPR elements of reality. In what follows, we shall be concerned mainly with the topologies of the spheres S0, S1, S2, S3, and S7, each of which is a set of binary numbers parameterized by Eq. (3), but with very different topologies from one another. Thus, for example, the 1-sphere, S1, is connected and parallelizable, but not simply connected. The spheres S3 and S7, on the other hand, are not only connected and parallelizable, but also simply connected. The crucial point here is that—since the topological properties of different spheres are dramatically different from one another—mistaking the points of one of them for the points of another is a serious error. But that is precisely what Bell has done.
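For readers puzzled by the excerpt's terminology: the claim that S3 is "closed under multiplication" refers to the standard fact that the unit quaternions form a group under the Hamilton product. Here is a quick numerical check of that algebraic fact alone (it verifies nothing about the paper's disputed conclusions):

```python
import math

def qmul(p, q):
    """Hamilton product of quaternions given as (w, x, y, z) tuples."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def norm(q):
    return math.sqrt(sum(c * c for c in q))

def normalize(q):
    n = norm(q)
    return tuple(c / n for c in q)

# Two arbitrary points of S^3 (unit quaternions)
a = normalize((1.0, 2.0, -1.0, 0.5))
b = normalize((0.3, -0.4, 0.5, 1.2))

# Their product is again a unit quaternion, i.e. it stays on S^3
print(f"|a| = {norm(a):.6f}  |b| = {norm(b):.6f}  |a*b| = {norm(qmul(a, b)):.6f}")
```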

Hopefully, someone is going to actually read Christian's paper and make some knowledgeable comments wrt its contentions -- rather than simply say that it's been rejected. Afaik, Christian's paper is unrefuted and generally unrecognized.
 
  • #978
ThomasT said:
1. The textbook that I learned qm from didn't say anything about nature being nonlocal.

My view is that Bell doesn't require me to assume that nature is nonlocal, which JesseM seems to indicate might well be the majority view:

2. But I am humble. See above. And wise. See below.

3. By the way DrC (and others), I'm going to be out blowing stuff up with various explosives and lighting things on fire with various lenses in commemoration of our independence or whatever. Plus lots of hotdogs with jalapenos, cheese and mustard -- and beer! HAPPY 4TH OF JULY!

1. There is a big difference between this and what I said. You aren't going to find textbooks promoting local realism, and you know it.

Whether nature is nonlocal or not is not what I am asserting. As I have said till I'm blue, nature might be nonrealistic. Or both. So you are being a bit misleading when you comment as you have.

A NOTE FOR EVERYONE: nonlocal could mean a lot of things. The Bohmian crew has one idea. Nonrealistic could mean a lot of things too. MWIers have an idea about this. But nonlocal could mean other things too - like that wave functions can be nonlocal, or that there are particles that travel FTL. So defining nonlocality still has a speculative element to it. I happen to subscribe to the kind of nonlocality that is consistent with the HUP. So if you think the HUP implies some kind of nonlocality, well, there's the definition. And that makes HUP believers into believers of a kind of nonlocality. I call that quantum nonlocality. And I think that is a fairly widespread belief, although I have nothing specific to back that up.

2. Perhaps your humility is one of your best traits. I know it is one of mine!

3. Have fun. And save a beer for me.
 
  • #979
ThomasT said:
1. We're waiting for DrC to clarify his 'realism' requirement -- truly a puzzlement in its own right.

I can see why it is hard to understand.

a) Fill in any set of hidden variables for the angle settings 0, 120, and 240 degrees for a group of hypothetical entangled photons.
b) This should be accompanied by a formula that allows me to deduce whether the photons are |H> or |V> polarized, based on the values of the HVs.
c) The results should reasonably match the prediction of QM, a 25% coincidence rate, regardless of which 2 different settings I choose. I will make my selections randomly, before I look at your HVs but after you have established their values and the formula.

When Christian shows me this, I'll read more. Not before, as I am quite busy: I must wash my hair tonight.
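To see why this challenge defeats any local hidden-variable model, here is a small enumeration (a sketch of the standard counting argument, not code posted by anyone in the thread). Give both photons the same predetermined outcome table for the three angles, which guarantees perfect correlation at equal settings; then no table, and hence no mixture of tables, can push the match rate for different settings below 1/3, while condition (c) asks for the QM value cos²(120°) = 25%.

```python
from itertools import product

ANGLES = (0, 120, 240)                 # the three allowed settings, degrees
PAIRS = [(0, 1), (0, 2), (1, 2)]       # the three pairs of distinct settings

best = 1.0
for table in product('HV', repeat=3):  # predetermined outcome at each angle
    # both photons carry this same table, so equal settings always agree
    match_rate = sum(table[i] == table[j] for i, j in PAIRS) / 3
    best = min(best, match_rate)
    print(table, f"-> match rate over distinct settings: {match_rate:.0%}")

print(f"\nSettings {ANGLES}: best any table achieves is {best:.0%};")
print("a probabilistic mixture of tables is an average of these, so it")
print("can never reach the 25% that QM predicts for different settings.")
```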
 
  • #980
DrChinese said:
Whether nature is nonlocal or not is not what I am asserting. As I have said till I'm blue, nature might be nonrealistic. Or both. So you are being a bit misleading when you comment as you have.
If nature is nonrealistic, then it must necessarily be true that quantum correlations are nonlocal. So, as far as I can tell, that's what you're saying, i.e., that Bell entails that there is no nature ... nothing ... underlying instrumental phenomena. But how could you, or Bell, or anybody, possibly know that? From a theorem? Maybe you've made a mistake somewhere in your thinking about this!
 
