Is action at a distance possible as envisaged by the EPR Paradox?

  • #951
nismaratwork said:
I can see why Dirac disdained this kind of pondering, which in the end has little or nothing to do with the work of physics and its applications in life.

So true. It would be nice if while debating the placement of punctuation and the definition of words in the language we speak daily, we perhaps reminded ourselves of the importance of predictions and related experiments. Because every day, there are fascinating new experiments involving new forms of entanglement. That would be the same "action at a distance" as envisioned in this thread which some think they have "disproven".

And just to prove that, just check out the following link:

As of this morning, this represented 572 articles on the subject - many theoretical but also many experimental - on entanglement and Bell.

Oh, and that would be so far in 2010. Please folks, get a grip. You don't need to take my word for it. Read about 50 or 100 of these papers, and you will see that these issues are being tackled every day by physicists who wake up thinking about this. And you will also see mixed in many interesting alternative ideas which are out of the mainstream: these articles are not all peer reviewed. Look for a Journal Reference to get those, which tend to be mainstream and higher quality overall. Many experimental results will be peer reviewed.
 
  • #952
zonde said:
What does visibility have in common with detection efficiency?
Visibility = (coincidence-max - coincidence-min) / (coincidence-max + coincidence-min)
Efficiency = coincidence rate / singles rate

They are often used differently in different contexts. The key is to ask: what pairs am I attempting to collect? Did I collect all of those pairs? Once I collect them, was I able to deliver them to the beam splitter? Of those photons going through the beam splitter, what % were detected? By careful analysis, the experimenter can often answer these questions. In state-of-the-art Bell tests, these can be important - but not always. Each test is a little different. For example, if fair sampling is assumed then strict evaluation of visibility may not be important. But if you are testing the fair sampling assumption as part of the experiment, it would be an important factor.

Clearly, the % of cases where there is a blip at Alice's station but not Bob's (and vice versa) is a critical piece of information where fair sampling is concerned. If you subtract that from 100%, you get a number. I believe this is what is referred to as visibility by Zeilinger, but honestly it is not always clear to me from the literature. Sometimes this may be called detection efficiency. At any rate, there are several distinct issues involved.
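
Just to keep the two ratios straight, here is a minimal sketch of how each would be computed from raw counts (the function names and numbers are hypothetical, purely for illustration; real analyses differ in how singles and coincidences are counted):

```python
# Minimal sketch of the two ratios discussed above, using made-up counts.
# "singles" = detections at one station regardless of whether the partner
# photon was also detected; the exact bookkeeping varies between experiments.

def visibility(coinc_max, coinc_min):
    """Fringe visibility from the max/min coincidence rates as an analyzer is rotated."""
    return (coinc_max - coinc_min) / (coinc_max + coinc_min)

def detection_efficiency(coincidences, singles):
    """Rough pair-collection efficiency: coincidence rate relative to the singles rate."""
    return coincidences / singles

print(visibility(coinc_max=9800, coinc_min=150))                 # ~0.97
print(detection_efficiency(coincidences=5000, singles=100000))   # 0.05
```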

Keep in mind that for PDC pairs, the geometric angle of the collection equipment is critical. Ideally, you want to get as many entangled pairs as possible and as few unentangled as possible. If alignment is not correct, you will miss entangled pairs. You may even mix in some unentangled pairs (which will reduce your results from the theoretical max violation of a BI). There is something of a border at which collecting more entangled pairs is offset by admitting too many more unentangled ones. So it is a balancing act.
 
  • #953
ThomasT said:
I understand the proofs of BIs. What I don't understand is why nonlocality or ftl are seriously considered in connection with BI violations and used by some to be synonymous with quantum entanglement.
In defining the argument, the assumptions and consequences had to be enumerated, regardless of how unlikely one or another potential consequence might be from some point of view. IF something physical actually traverses that space, in the allotted time, to effect the outcomes as measured, realism is saved. It doesn't matter how reasonable or silly that might be: given the "IF", the consequence follows and thus must be included in the range of potentials.

JesseM said:
Yes, you don't understand it, but mainstream physicists are in agreement that Bell's equations all follow directly from local realism plus a few minimal assumptions (like no parallel universes, no 'conspiracies' in past conditions that predetermine what choice the experimenter will make on each trial and tailor the earlier hidden variables to those future choices), so why not consider that the problem more likely lies with your understanding rather than with that of all those physicists over the decades?
This is the worst possible argument. It is almost precisely the same argument a friend of mine, who turned religious, used to try to convert me. It's invalid in any context, no matter how solid the claim it's used to support. I cringe at it no matter how trivially true the claim it's used to support. So if the majority "don't understand", as you yourself have stated, acceptance of this argument makes the majority's acceptance a self-fulfilling prophecy.

JesseM said:
And (2) would necessarily be true in all local realist theories that satisfy those few minimal assumptions. (2) is not in itself a separate assumption, it follows logically from the postulate of local realism.
You call it a postulate of local realism, but fail to mention that this "postulate of local realism" is predicated on a very narrowly defined 'operational' definition, one which even its originators (EPR) disavowed, at the time it was proposed, as being the only or a sufficiently complete definition. It was a definition that I personally rejected at a very young age, before I ever heard of EPR or knew what QM was, solely on classical grounds, but related to some ideas DrC used to reject Humean realism. Now such silly authority arguments, as provided above, are used to demand that I drop "realism", because somebody generalized an 'operational' definition that I rejected in my youth and proved it false. Am I supposed to be in awe of that?

JesseM said:
It implies the falsity of local realism, which means if you are a realist who believes in an objective universe independent of our measurements, and you don't believe in any of the "weird" options like parallel worlds or "conspiracies", your only remaining option is nonlocality/ftl.
Unequivocally false. There are other options, unless you want to insist that one 'operational' definition is, by academic definition, the only definition of realism available. Even then it doesn't make you right; you have only chosen a definition to ensure you can't be wrong.

There are whole ranges of issues involved, many of which have philosophical content that doesn't strictly belong in science, unless of course you can formalize it into something useful. Yet the "realism" claim associated with Bell is a philosophical claim: it takes a formalism geared toward a single 'operational' definition and expands it over the entire philosophical domain of realism. It's a massive composition fallacy.

The composition fallacy runs even deeper. There's the assumption that the things we measure are existential in the sense of things. Even if every possible measurable we are capable of is provably no more real than a coordinate choice, it is NOT proof that things don't exist independent of being measured (the core assumption of realism), or that a theoretical construct can't build an empirically consistent emergent system based on existential things. Empirical completeness and the completeness of nature are not synonymous. Fundamentally, realism is predicated on measurement independence, and cannot be proved false on the grounds that an act of measurement has effects. If it didn't have effects, measurements would be magical. Likewise, in a strict realist sense, an existential thing (an independent variable) which has independently measurable properties is also a claim of magic.

So please, at least qualify local realism with "Einstein realism", "Bell realism", or some other suitable qualifier, so as not to make the absurd excursion into a blanket philosophical claim that the entire range of all forms of "realism" are provably falsified. It turns science into a philosophical joke, whether right or wrong. If this argument is overly philosophical, sorry; that is what the blanket claim that BI violations falsify local realism imposes.
 
  • #954
my_wan said:
It turns science into a philosophical joke, whether right or wrong. If this argument is overly philosophical, sorry; that is what the blanket claim that BI violations falsify local realism imposes.

Does it help if we say that BI violations blanket falsify claims of EPR (or Bell) locality and EPR (or Bell) realism? Because if words are to have meaning at all, this is the case.
 
  • #955
DrChinese said:
So true. It would be nice if while debating the placement of punctuation and the definition of words in the language we speak daily, we perhaps reminded ourselves of the importance of predictions and related experiments. Because every day, there are fascinating new experiments involving new forms of entanglement. That would be the same "action at a distance" as envisioned in this thread which some think they have "disproven".

And just to prove that, just check out the following link:

As of this morning, this represented 572 articles on the subject - many theoretical but also many experimental - on entanglement and Bell.

Oh, and that would be so far in 2010. Please folks, get a grip. You don't need to take my word for it. Read about 50 or 100 of these papers, and you will see that these issues are being tackled every day by physicists who wake up thinking about this. And you will also see mixed in many interesting alternative ideas which are out of the mainstream: these articles are not all peer reviewed. Look for a Journal Reference to get those, which tend to be mainstream and higher quality overall. Many experimental results will be peer reviewed.

I like this approach very much. We should never forget the need for what works, and how it works in the midst of WHY it works.
 
  • #956
my_wan said:
This is the worst possible argument. It is almost precisely the same argument a friend of mine, who turned religious, used to try to convert me. It's invalid in any context, no matter how solid the claim it's used to support. I cringe at it no matter how trivially true the claim it's used to support. So if the majority "don't understand", as you yourself have stated, acceptance of this argument makes the majority's acceptance a self-fulfilling prophecy.
Huh? I said it was ThomasT who didn't understand Bell's proof, not the majority of physicists. And in technical subjects like science and math, I think it's perfectly valid to say that if some layman doesn't understand the issues very well but is confused about the justification for some statement that virtually all experts endorse, the default position of a layman showing intellectual humility should be that it's more likely the mistake lies with his/her own understanding, rather than taking it as a default that they've probably found a fatal flaw that all the experts have overlooked and proceeding to try to convince others of that. Of course this is just a sociological statement about likelihood that a given layman has actually discovered something groundbreaking, I'm not trying to argue that anyone should take the mainstream position on faith or not bother asking questions about the justification for this position. But if you don't take this advice there's a good chance you'll fall victim to the Dunning-Kruger effect, and perhaps also become the type of "bad theoretical physicist" described by Gerard 't Hooft here.
my_wan said:
You call it a postulate of local realism, but fail to mention that this "postulate of local realism" is predicated on a very narrowly defined 'operational' definition, one which even its originators (EPR) disavowed, at the time it was proposed, as being the only or a sufficiently complete definition.
I define "local realism" to mean that facts about the complete physical state of any region of spacetime can be broken down into a sum of local facts about the state of individual points in spacetime in that region (like the electromagnetic field vector at each point in classical electromagnetism), and that each point can only be causally influenced by other points in its past light cone. Do you think that this is too "narrowly defined" or that EPR would have adopted a broader definition where the above wasn't necessarily true? (if so, can you provide a relevant quote from them?) Or alternatively, do you think that Bell's derivation of the Bell inequalities requires a narrower definition than the one I've just given?
my_wan said:
Unequivocally false. There are other options, unless you want to insist that one 'operational' definition is, by academic definition, the only definition of realism available. Even then it doesn't make you right; you have only chosen a definition to ensure you can't be wrong.
I don't know what you mean by "operational", my definition doesn't appear to be an operational one but rather an objective description of the way the laws of physics might work. If you do think my definition is too narrow and that there are other options, could you give some details on what a broader definition would look like?
my_wan said:
There are whole ranges of issues involved, many of which have philosophical content that doesn't strictly belong in science, unless of course you can formalize it into something useful. Yet the "realism" claim associated with Bell is a philosophical claim: it takes a formalism geared toward a single 'operational' definition and expands it over the entire philosophical domain of realism. It's a massive composition fallacy.
In a scientific/mathematical field it's only meaningful to use terms like "local realism" if you give them some technical definition which may be different than their colloquial meaning or their meaning in nonscientific fields like philosophy. So if a physicist makes a claim about "local realism" being ruled out, it doesn't really make sense to say the claim is a "fallacy" on the basis of the fact that her technical definition doesn't match how you would interpret the meaning of that phrase colloquially or philosophically or whatever. That'd be a bit like saying "it's wrong to define momentum as mass times velocity, since that definition doesn't work for accepted colloquial phrases like 'we need to get some momentum going on this project if we want to finish it by the deadline'".
my_wan said:
The composition fallacy runs even deeper. There's the assumption that the things we measure are existential in the sense of things.
Not sure what you mean. Certainly there's no need to assume, for example, that when you measure different particles' "spins" by seeing which way they are deflected in a Stern-Gerlach device, you are simply measuring a pre-existing property which each particle has before measurement (so each particle was already either spin-up or spin-down on the axis you measure).
my_wan said:
Even if every possible measurable we are capable of is provably no more real than a coordinate choice
Don't know what you mean by that either. Any local physical fact can be defined in a way that doesn't depend on a choice of coordinate system, no?
my_wan said:
it is NOT proof that things don't exist independent of being measured (the core assumption of realism).
Since I don't know what it would mean for "every possible measurable we are capable of is provably no more real than a coordinate choice" to be true, I also don't know why the truth of this statement would be taken as "proof that things don't exist independent of being measured". Are you claiming that any actual physicists argue along these lines? If so, can you give a reference or link?
my_wan said:
Fundamentally, realism is predicated on measurement independence, and cannot be proved false on the grounds that an act of measurement has effects.
I don't see why, nothing about my definition rules out the possibility that the act of measurement might always change the system being measured.
my_wan said:
So please, at least qualify local realism with "Einstein realism", "Bell realism", or some other suitable qualifier, so as not to make the absurd excursion into a blanket philosophical claim that the entire range of all forms of "realism" are provably falsified.
All forms compatible with my definition of local realism are incompatible with QM. I don't know if you would have a broader definition of "local realism" than mine, but regardless, see my point about the basic independence of the technical meaning of terms and their colloquial meaning.
 
  • #957
zonde said:
Interesting. And do those papers suggest at least approximately what kind of experiments they will be?
Or is it just a very general idea?
See for example this paper and this one...the discussion seems fairly specific.
zonde said:
Besides, if you want to discuss some betting with money, you are in the wrong place.
I was just trying to get a sense of whether Bill actually believed himself it was likely that all the confirmation of QM predictions in these experiments would turn out to be a consequence of a local realist theory that was "exploiting" both the detector efficiency loophole and the locality loophole simultaneously, or if he was just scoffing at the fact that experiments haven't closed both loopholes simultaneously for rhetorical purposes (of course there's nothing wrong with pointing out the lack of loophole-free experiments in this sort of discussion, but Bill's triumphant/mocking tone when pointing this out would seem a bit hollow if he didn't actually think such a loophole-exploiting local theory was likely).
 
  • #958
DrChinese said:
Does it help if we say that BI violations blanket falsify claims of EPR (or Bell) locality and EPR (or Bell) realism? Because if words are to have meaning at all, this is the case.
That is in fact the case. BI violations do in fact rule out the very form of realism they were predicated on. "EPR local realism" would be fine, as that names the source of the operational definition Bell did in fact falsify. Some authors already do this; Adan Cabello, for example, spelled it out as "Einstein-Podolsky-Rosen element of reality" (Phys. Rev. A 67, 032107 (2003)). Perfectly acceptable.

As an aside, I really doubt that any given individual element of "physical" reality, assuming such exist and realism holds, corresponds to any physically measurable quantity. This does not a priori preclude a theoretical construct from successfully formalizing such elements. Note how diametrically opposed this is to the operational definition used by EPR:
“If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of physical reality corresponding to this physical quantity.”

The original EPR paper gave a more general definition of realism that wasn't so contingent on the operational definition:
"every element of physical reality must have a counterpart in physical theory."
Though this almost certainly implicitly assumed some correspondence I consider more than a little suspect, it doesn't a priori assume an element of physical reality has a direct correspondence with observables. Neither do I consider an empirically complete theory incomplete on the grounds that unobservables may be presumed to exist yet not be defined by the theory. That also opposes Einstein's realism. However, certain issues, such as the vacuum catastrophe, GR + QM, dark matter/energy, etc., are a fairly good if presumptuous indicator of incompleteness.

These presumptions, which oppose the EPR definition, began well before I was a teenager or had any clue about EPR or QM, and were predicated on hard realism. Thus I can't make specific claims; I can only state that blanket philosophical claims about what BI violations falsify are a highly unwarranted composition fallacy, not that the claim is ultimately false.
 
  • #959
JesseM said:
I was just trying to get a sense of whether Bill actually believed himself it was likely that all the confirmation of QM predictions in these experiments would turn out to be a consequence of a local realist theory that was "exploiting" both the detector efficiency loophole and the locality loophole simultaneously, or if he was just scoffing at the fact that experiments haven't closed both loopholes simultaneously for rhetorical purposes
And as I explained, I do not engage in these discussions for religious purposes, so I'm surprised that you would expect me to bet on it. A claim has been made about the non-locality of the universe. I, and others, have raised questions about the premises used to support that claim. Rather than explain why the premises are true, you instead expect me to bet that the claim is not true. In fact the suggestion is itself perhaps indicative of your approach to these discussions, which I do not consider to be about winning or losing an argument but about understanding the truth of the issues in front of us.

The fact that QM and experiments agree is a big hint that the odd-man out (Bell inequalities) does not model the same thing as QM does, which is what is realized in real experiments. There is no question about this. I think you agree with this. So I'm not sure why you think by repeatedly mentioning the fact that numerous experiments have agreed with QM, it somehow advances your argument. It doesn't. Also the phrase "experimental loopholes" is a misnomer because it gives the false impression that there is something "wrong" with the experiments, such that "better" experiments have to be performed. This is a backward look at it. Every so-called "loophole" is actually a hidden assumption made by Bell in deriving his inequalities.

When I mentioned "assumption" previously, you seemed to express surprise, despite the fact that I have already pointed out to you several times hidden assumptions within Bell's treatment that make it incompatible with Aspect-type experiments. If any one or more of the assumptions in Bell's treatment are not met in the experiments, Bell's inequalities will not apply. The locality assumption is explicit in Bell's treatment, so Bell's proponents think violation of the inequalities definitely means violation of the locality principle. But there are other hidden assumptions such as:

1) Every photon pair will be detected (due to choice of only +/- as possible outcomes)
2) P(lambda) is equivalent for each of the terms of the inequality
3) Datasets of pairs are extracted from a dataset of triples
4) Non-contextuality
5) ...

And the others I have not mentioned or are yet to be discovered. So whenever you hear about "detection efficiency loophole", the issue really is a failure of hidden assumption (1). And the other example I just gave a few posts back about cyclicity and indexing, involves the failure of (2) and (3).

It is therefore not surprising that some groups have reported on locally causal explanations of many of these Bell-test experiments, again confirming that the problem is in the hidden assumptions used by Bell, not in the experimenters.

JesseM said:
(of course there's nothing wrong with pointing out the lack of loophole-free experiments in this sort of discussion, but Bill's triumphant/mocking tone when pointing this out would seem a bit hollow if he didn't actually think such a loophole-exploiting local theory was likely).
I make an effort to explain my point of view; you are free to completely demolish it with legitimate arguments. I will continue to point out the flaws I see in your responses (as long as a relevant response can be discerned from them), and if your arguments are legitimate, I will change my point of view accordingly. But if you cannot provide a legitimate argument and you treat the goal of discussion as one of winning/losing, you may be inclined to interpret my conviction about my point of view as "triumphant/mocking". But that is just your perspective and you are entitled to it, even if it is false.
 
  • #960
billschnieder said:
It is therefore not surprising that some groups have reported on locally causal explanations of many of these Bell-test experiments, again confirming that the problem is in the hidden assumptions used by Bell, not in the experimenters.

When you say "explanations", I wonder exactly what qualifies as an explanation. The only local realistic model I am aware of is the De Raedt et al model, which is a computer simulation which satisfies Bell. All other local explanations I have seen are not realistic or have been generally refuted (e.g. Christian, etc). And again, by realistic, I mean per the Bell definition (simultaneous elements of reality, settings a, b and c).
 
  • #961
billschnieder said:
As I mentioned to you earlier, it is your opinion here that is wrong.
Are you saying that Leggett and Garg themselves claimed that their inequality should apply to situations where the three values a,b,c don't represent times of measurement, including the scenario with doctors collecting data on patients from different countries? If so, can you quote from the paper since it doesn't seem to be available freely online? Or are you making some broader claim that the reasoning Leggett and Garg used could just as easily be applied to other scenarios, even if they themselves didn't do this?
billschnieder said:
Of course, the LGI applies to the situation you mention, but inequalities of that form were originally proposed by Boole in 1862 (see http://rstl.royalsocietypublishing.org/content/152/225.full.pdf+html) and had nothing to do with time. All that is necessary for it to apply is n-tuples of two valued (+/-) variables. In Boole's case it was three boolean variables. The inequalities result simply from arithmetic, and nothing else.
We perform an experiment in which each data point consists of triples of data such as (i,j,k). Let us call this set S123. We then decide to analyse this data by extracting three data sets of pairs such as S12, S13, S23. What Boole showed was essentially that if i, j, k are two-valued variables, then no matter the type of experiment generating S123, the datasets of pairs extracted from S123 will satisfy the inequalities:

|<S12> +/- <S13>| <= 1 +/- <S23>
The paper by Boole you linked to is rather long and he doesn't seem to use the same notation; can you point me to the page number where he derives an equivalent equation so I can see the discussion leading up to it? I would guess he was assuming that we were picking which pairs to extract from each triple in a random way so that there'd be no possibility of a systematic correlation between our choice of which pair to extract and the values of all members of the triplet S123 (and even if Boole neglected to explicitly mention such an assumption I'd assume you could find later texts on probability which did). And this would be equivalent to the assumption Leggett and Garg made of "noninvasive measurement", that the choice of which times to measure a given particle or system aren't correlated with the probability of different hidden classical histories the particle/system might be following. So if you construct an example where we would expect a correlation between what pair of values are sampled and the underlying facts about the value for all three possibilities, then I expect neither Boole nor Leggett and Garg would find it surprising or contrary to their own proofs that the inequalities would no longer hold.
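
For what it's worth, the purely arithmetic core of the pairs-from-triples claim is easy to check by brute force. A minimal sketch (my own illustration, not code from Boole or anyone else):

```python
from itertools import product

# Brute-force check of the arithmetic fact: if every data point is a full triple
# (x1, x2, x3) of +/-1 values and ALL pairs are extracted from those SAME triples,
# the per-triple identities below force |<S12> +/- <S13>| <= 1 +/- <S23> after
# averaging over any distribution of triples (since |<X>| <= <|X|>).
for x1, x2, x3 in product([+1, -1], repeat=3):
    assert abs(x1*x2 - x1*x3) == 1 - x2*x3
    assert abs(x1*x2 + x1*x3) == 1 + x2*x3

print("identities hold for all 8 triples")
```

Whether real data meets the "pairs extracted from a common set of triples, with a sampling choice independent of the triple values" condition is of course the substantive question.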
billschnieder said:
You can verify that this is Bell's inequality (replace 1,2,3 with a,b,c).
You mean equation (15) in his original paper? But in his derivations the hidden variable λ can represent conditions that occur before the experimenters make a random choice of detector settings (see p. 242 of Speakable and Unspeakable in Quantum Mechanics), so there's good justification for saying λ should be independent of detector settings, and in any case this is explicitly included as the "no-conspiracy condition" in rigorous proofs of Bell's theorem.
billschnieder said:
So a violation of these inequalities by data, points to mathematically incorrect treatment of the data.
A violation of the inequalities by data which doesn't match the conditions Bell and Leggett-Garg and Boole assumed when deriving them doesn't indicate a flaw in reasoning which says the inequalities should hold if the conditions are met.
JesseM said:
I also found the paper where you got the example with patients from different countries here,
billschnieder said:
That is why I gave you the reference before, have you read it, all of it?
You mentioned the name of the paper but didn't give a link, when I said I "found" it I just meant I had found an online copy. And no, I didn't read it all the way through, just enough sections that I thought I got the idea of how they thought the scenario with patients from different countries was supposed to be relevant to Leggett-Garg. If there is some particular section you think I should pay more attention to, feel free to point to it.
JesseM said:
This critique appears to be rather specific to the Leggett-Garg inequality; maybe you could come up with a variation for other inequalities, but it isn't obvious to me (I think the 'noninvasive measurements' condition would be most closely analogous to the 'no-conspiracy' condition in usual inequalities, but the 'no-conspiracy' condition is a lot easier to justify in terms of local realism when λ can refer to the state of local variables at some time before the experimenters choose what detector settings to use)
billschnieder said:
This is not a valid criticism for the following reason:

1) You do not deny that the LGI is a Bell-type inequality. Why do you think it is called that?
Because the derivation is closely analogous and the conclusion (that QM is incompatible with certain assumptions about 'hidden' objective facts that determine measurement outcomes) is also quite similar. However, the assumptions in the derivation do differ from the assumptions in other Bell-type proofs even if they are very analogous (like the no-conspiracy assumption being replaced by the noninvasive measurement assumption).
billschnieder said:
2) You have not convincingly argued why the LGI should not apply to the situation described in the example I presented
I don't have access to the original Leggett-Garg paper, but this paper which I linked to before says:
In a paper provocatively entitled "Quantum Mechanics versus Macroscopic Realism: Is the Flux There when Nobody Looks? A. J. Leggett and A. Garg[1] proposed a way to determine whether the magnetic flux of a SQUID (superconducting quantum interference device) was compatible with the postulates:

(A1) Macroscopic Realism: "A macroscopic system with two or more macroscopically distinct states available to it will at all times be in one or the other of these states."

(A2) Noninvasive Measurability: "It is possible, in principle, to determine the state of the system with arbitrary small perturbation on its subsequent dynamics."
So, the quote after (A2) does indicate that they were assuming the condition that the choice of which two measurements to make isn't correlated with the values the system takes at each of the three possible times. An example which is constructed in such a way that there is a correlation between the two sample points and the three values for each data triplet would be one that isn't meeting this condition, and thus there'd be no reason to expect the inequality to hold for it, so it isn't a flaw in the derivation that you can point to such an example.
billschnieder said:
3) You do not deny the fact that in the example I presented, the inequalities can be violated simply based on how the data is indexed.
Unclear what you mean by "simply based on how the data is indexed". In the example, the Ab in AaAb was taken under consistently different observable experimental conditions than the Ab in AbAc; the first Ab always has a superscript 2 indicating a patient from Lyon, the second Ab always has a superscript 1 indicating a patient from Lille. And they also say:
On even dates we have Aa = +1 and Ac = −1 in both cities while Ab = +1 in Lille and Ab = −1 in Lyon. On odd days all signs are reversed.
So, in this case depending on whether you are looking at the data pair AaAb or AbAc on a given date, the value of Ab is different. And even if you don't know the date information, from an objective point of view (the point of view of an all-knowing omniscient being), this isn't a case where each sample is taken from a "data point" consisting of triplet of objective (hidden) facts about a,b,c, such that the probability distribution on triplets for a sample pair AaAb is the same as the probability distribution on triplets for the other two sample pairs AaAc and AbAc. In the frequentist understanding of probability, this means that in the limit as the number of sample pairs goes to infinity, the frequency at which any given triplet (or any given ordered pair of triplets if the two members of the sample pair are taken from different triplets) is associated with samples of type AaAb should be the same as the frequency at which the same triplet is associated with samples of type AaAc and AbAc. If the "noninvasive measurability" criterion is met in a Leggett-Garg test, this should be true of the measurements at different pairs of times of SQUIDS if local realism is true. Likewise, if the no-conspiracy condition is true in a test of the form Bell discussed in his original paper, this should also be true if local realism is true.
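
To make that concrete, here is a small toy reconstruction of the even/odd-date example as I read the quoted description (my own sketch; the paper's actual notation and conventions may differ). The cross-city "indexed" pairing violates the '+' form of the inequality, while pairs drawn from genuine same-city, same-date triples cannot:

```python
# Toy reconstruction of the two-city example as described in the quote above.
# Even dates: Aa=+1, Ac=-1 in both cities; Ab=+1 in Lille, Ab=-1 in Lyon.
# Odd dates: all signs reversed.
def lille(date):                      # "city 1"
    s = +1 if date % 2 == 0 else -1
    return {"a": +s, "b": +s, "c": -s}

def lyon(date):                       # "city 2"
    s = +1 if date % 2 == 0 else -1
    return {"a": +s, "b": -s, "c": -s}

def avg(values):
    values = list(values)
    return sum(values) / len(values)

dates = range(1000)

# "Indexed" pairing as in the example: the Ab paired with Aa comes from Lyon,
# while the Ab paired with Ac comes from Lille (Aa and Ac agree in both cities,
# so their city assignment doesn't matter here).
S_ab = avg(lille(d)["a"] * lyon(d)["b"] for d in dates)
S_bc = avg(lille(d)["b"] * lyon(d)["c"] for d in dates)
S_ac = avg(lille(d)["a"] * lyon(d)["c"] for d in dates)
print(S_ab, S_bc, S_ac)                    # -1.0 -1.0 -1.0
print(abs(S_ab + S_ac) <= 1 + S_bc)        # False: the '+' form of the inequality fails

# Pairs extracted from genuine triples (same city, same date) can never violate it.
T_ab = avg(lille(d)["a"] * lille(d)["b"] for d in dates)
T_bc = avg(lille(d)["b"] * lille(d)["c"] for d in dates)
T_ac = avg(lille(d)["a"] * lille(d)["c"] for d in dates)
print(abs(T_ab + T_ac) <= 1 + T_bc)        # True
```

In this toy version, pooling the data and pairing at random across dates, as suggested below, would also wash out the city/date correlation and restore the bound.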
billschnieder said:
4) You do not deny the fact that in the example, there is no way to ensure the data is correctly indexed unless all relevant parameters are known by the experimenters
I would deny that, at least in the limit as the number of data points becomes very large. In this case they could just pool all their data, and use a random process (like a coin flip) to decide whether each Aa should be put in a pair with an Ab data point or an Ac data point, and similarly for the other two.
billschnieder said:
5) You do not deny that Bell's inequalities involve pairs from a set of triples (a,b,c) and yet experiments involve triples from a set of pairs.
I certainly deny this too, in fact I don't know what you can be talking about here. Different inequalities involve different numbers of possible detector settings, but if you look at any particular experiment designed to test a particular inequality, you always find the same number of possible detector settings in the inequality as in the experiment. If you disagree, point me to a particular experiment where you think this wasn't true!
billschnieder said:
6) You do not deny that it is impossible to measure triples in any EPR-type experiment, therefore Bell-type inequalities do not apply to those experiments.
This one is so obviously silly you really should know better. The Bell-type inequalities are based on the theoretical assumption that on each trial there is a λ which either predetermines a definite outcome for each of the three detector settings (like the 'hidden fruits' that are assumed to be behind each box in my scratch lotto analogy), or at least predetermines a probability for each of the three which is not influenced by what happens to the other particle at the other detector (i.e. P(A|aλ) is not different from P(A|Bbaλ)). If this theoretical assumption were valid, and the probability of different values of λ on each trial did not depend on the detector settings a and b on that trial, then this would be a perfectly valid situation where these inequalities would be predicted to hold. Of course we don't know if these theoretical assumptions actually hold in the real world, but that's the point of testing whether the inequalities hold up in the real world--if they don't, and our experiments meet the necessary observable conditions that were assumed in the derivation, then this constitutes an experimental falsification of one of the predictions of our original theoretical assumptions.
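
As a sanity check of that logic, here is a minimal toy simulation (entirely my own construction, with made-up weights) in which outcomes are predetermined by a shared λ, settings are chosen independently of λ, and only one pair of settings is measured on each trial; the inequality then holds up to ordinary statistical fluctuation, even though no triple is ever measured:

```python
import random

# Toy local-hidden-variable sketch: a shared lambda predetermines a +/-1 outcome
# for each of three settings, Bob's particle is perfectly anticorrelated with
# Alice's, settings are chosen at random independently of lambda ("no conspiracy"),
# every pair is detected, and only ONE pair of settings is sampled per trial.
random.seed(1)

LAMBDAS = [{"a": x, "b": y, "c": z}
           for x in (+1, -1) for y in (+1, -1) for z in (+1, -1)]
WEIGHTS = [random.random() for _ in LAMBDAS]   # arbitrary fixed distribution over lambda

PAIRS = [("a", "b"), ("a", "c"), ("b", "c")]
totals = {p: 0.0 for p in PAIRS}
counts = {p: 0 for p in PAIRS}

for _ in range(300_000):
    lam = random.choices(LAMBDAS, weights=WEIGHTS)[0]   # independent of the settings
    pair = random.choice(PAIRS)                         # the only settings used this trial
    A = lam[pair[0]]            # Alice's predetermined outcome for her setting
    B = -lam[pair[1]]           # Bob's outcome: perfect anticorrelation
    totals[pair] += A * B
    counts[pair] += 1

E = {p: totals[p] / counts[p] for p in PAIRS}
lhs = abs(E[("a", "b")] - E[("a", "c")])
rhs = 1 + E[("b", "c")]
print(E)
print(lhs <= rhs + 0.01)   # holds up to small sampling noise; QM can exceed the bound
```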
billschnieder said:
Boole had shown 100+ years ago that you can not substitute Rij for Sij in those type of inequalities.
I don't know what you mean by "Rij".
 
  • #962
JesseM said:
Huh? I said it was ThomasT who didn't understand Bell's proof, not the majority of physicists.
Oops, my apologies.

Wrt the Dunning-Kruger effect I agree. However, if you applied the same standard to the best educated people of the past they would be subject to the same illusory superiority. Though they likely had the skills to overcome it with the right information. So for those who simply insist X is wrong, you have a point. I'm not denying the validity of Bell's theorem in conclusively ruling out a brand of realism. I'm denying the generalization of that proof to all forms of realism.

I have fallen victim to assuming X, which apparently entailed Y, and tried to maintain X by maintaining Y, only to realize X could be maintained without Y. It happens. But in an interesting subject it's not always in our best interest to take an authority at face value; it is better to question it. Denial and accusations that the authority is wrong, silly, etc., without a very solid and convincing argument, is just being a crackpot. Yet authoritative sources can also overestimate the generality that a given piece of knowledge endows.

JesseM said:
I define "local realism" to mean that facts about the complete physical state of any region of spacetime can be broken down into a sum of local facts about the state of individual points in spacetime in that region (like the electromagnetic field vector at each point in classical electromagnetism), and that each point can only be causally influenced by other points in its past light cone. Do you think that this is too "narrowly defined" or that EPR would have adopted a broader definition where the above wasn't necessarily true? (if so, can you provide a relevant quote from them?) Or alternatively, do you think that Bell's derivation of the Bell inequalities requires a narrower definition than the one I've just given?
Ok, that works. But I got no response on what effects the non-commutativity of vector products, even of classical vectors, has on the computational demands of modeling BI violations. If these elements are transfinite, what role might Hilbert's paradox of the Grand Hotel play in such effects? EPR correlations are certainly not unique in requiring relative offsets versus absolute coordinate values. SR is predicated on it. If observables are projections from a space with an entirely different metric, which doesn't commute with a linear metric of the space we measure, that could impart computational difficulties which BI doesn't recognize. I didn't get any response involving Maxwell's equations either.

I'm not trying to make the point that Bell was wrong; he was absolutely and unequivocally right within the context of the definition he used. I'm merely rejecting the overgeneralization of that definition. Even if no such realistic model exists, by any definition, I still want to investigate all these different principles that might be behind such an effect. The authoritative claim that Bell was right is perfectly valid; to overgeneralize that into a sea of related unknowns, even by authoritative sources, is unwarranted.

JesseM said:
I don't know what you mean by "operational", my definition doesn't appear to be an operational one but rather an objective description of the way the laws of physics might work. If you do think my definition is too narrow and that there are other options, could you give some details on what a broader definition would look like?
Physics is contingent upon operational, not philosophical, claims. What did the original EPR paper say about it?
1) "far from exhausting all possible ways to recognize physical reality"
2) "Regarded not as a necessary, but merely as a sufficient, condition of reality"
3) "A comprehensive definition of reality is, however, unnecessary for our purposes"

In other words, they chose a definition that had operational value in making the point more concise, not a definition which defined reality itself.

JesseM said:
In a scientific/mathematical field it's only meaningful to use terms like "local realism" if you give them some technical definition which may be different than their colloquial meaning or their meaning in nonscientific fields like philosophy. So if a physicist makes a claim about "local realism" being ruled out, it doesn't really make sense to say the claim is a "fallacy" on the basis of the fact that her technical definition doesn't match how you would interpret the meaning of that phrase colloquially or philosophically or whatever. That'd be a bit like saying "it's wrong to define momentum as mass times velocity, since that definition doesn't work for accepted colloquial phrases like 'we need to get some momentum going on this project if we want to finish it by the deadline'".
True, technical definitions confuse the uninitiated quite often. Locality is one of them, being predicated on relativity. Thus it's in principle possible to violate locality without violating realism, as the stated EPR consequences recognize. Yet if "realism" is technically predicated on the operational definition provided by EPR, why reject out of hand, as a source of research, definitions counter to the one EPR provided? That is factually overstepping the technical bounds within which the "realism" used to reject it is academically defined. That's having your cake and eating it too.

JesseM said:
Not sure what you mean. Certainly there's no need to assume, for example, that when you measure different particles' "spins" by seeing which way they are deflected in a Stern-Gerlach device, you are simply measuring a pre-existing property which each particle has before measurement (so each particle was already either spin-up or spin-down on the axis you measure).
Unless observables are a linear projection from a space which has a non-linear mapping to our measured space of variables, to name just one possibility. Nor does realism necessarily entail pre-existing properties that are measurable. It doesn't even entail that independent variables have any independent measurable properties whatsoever.

JesseM said:
Don't know what you mean by that either. Any local physical fact can be defined in a way that doesn't depend on a choice of coordinate system, no?
Yes, assuming the variables required are algorithmically compressible or finite. I never got an answer to this either: what does it mean when you can model a rotation of the beam in an EPR model and maintain BI violations while individual photon paths vary, yet the apparently physically equivalent operation of uniformly rotating the pair of detectors destroys it? Are they not physically equivalent transforms? Why are physically equivalent transforms not physically equivalent? Perhaps the issue of non-commutative classical vector products needs to be investigated.

JesseM said:
All forms compatible with my definition of local realism are incompatible with QM. I don't know if you would have a broader definition of "local realism" than mine, but regardless, see my point about the basic independence of the technical meaning of terms and their colloquial meaning.
If that is your definition of local realism, fine. But you can't claim that your definition precludes alternatives, or that alternatives are precluded by your definition. You want a broader "technical" definition of realism? It's not mine; it came from exactly the same source as yours did: "An element of reality that exists independent of any measurement". That's it. Whatever more you add about its relationship with what is measured or measurable is a presumption that goes beyond the general definition. So I would say that the very source from which you derive your claim of a "technical" definition disavows that particular definition as sufficiently general to constitute a general technical definition, even without being explicitly aware of the issues it now presents.
 
  • #963
billschnieder said:
And as I explained, I do not engage in these discussions for religious purposes, so I'm surprised that you would expect me to bet on it.
A nonreligious person can have intuitions and opinions about the likelihood of various possibilities, like the likelihood that improved gravitational wave detectors will in the near future show that general relativity's predictions about gravitational waves are false. If someone doesn't think this is very likely, I would think it a bit absurd for them to gloat about the lack of experimental confirmation of gravitational waves in an argument with someone taking the mainstream view that general relativity is likely to be accurate at classical scales.
billschnieder said:
I, and others, have raised questions about the premises used to support that claim. Rather than explain why the premises are true, you instead expect me to bet that the claim is not true.
As you no doubt remember I gave extended arguments and detailed questions intended to show why your claims that Bell's theorem is theoretically flawed or untestable don't make sense, but you failed to respond to most of my questions and arguments and then abruptly shut down the discussion, in multiple cases (As with my posts here and here where I pointed out that your argument about the failure of the 'principle of common cause' ignored the specific types of conditions where it failed as outlined in the Stanford Encyclopedia article you were using as a reference, and I asked you to directly address my argument about past light cones in a local realist universe without relying on nonapplicable statements from the encyclopedia article. Your response here was to ignore all the specific quotes I gave you about the nature of the required conditions and declare that you'd decided we'd have to 'agree to disagree' on the matter rather than discuss it further...if you ever change your mind and decide to actually address the light cone argument in a thoughtful way, you might start by saying whether you disagree with anything in post #63 here).

Of course the point that Bell inequalities might not actually be violated with loophole-free tests is totally separate from the idea that the proof itself is flawed or that perfect tests are impossible in the first place unless we know the values of all hidden variables and can control for them (the arguments you were making earlier). Unlike with those earlier arguments I don't actually disagree with your basic point that they might not be violated with loophole free tests so there's no need for me to try to argue with you about that, I was just using the idea of betting to point to the absurdity of your gloating attitude about the lack of loophole-free tests. I think this gloating rather typifies your "lawyerly" approach to the subject, where you are trying to cast doubt on Bell using rhetorical strategies rather than examine the issues in a detailed and thoughtful manner.
billschnieder said:
The fact that QM and experiments agree is a big hint that the odd-man out (Bell inequalities) does not model the same thing as QM does, which is what is realized in real experiments.
Uh, the whole point of the Bell inequalities is to prove that the assumed conditions they are modeling (local realism) are incompatible with QM! Do you really not understand this after all this time, or is this just another example of "it sounds good rhetorically, who cares if it's really a plausible argument?"
billschnieder said:
So I'm not sure why you think by repeatedly mentioning the fact that numerous experiments have agreed with QM, it somehow advances your argument. It doesn't.
My "argument" is that Bell has a valid proof that local realism and QM are incompatible, and thus that experimental verification of QM predictions about Bell inequality violations also constitute experimental falsification of local realism. Do you really not understand the very basic logic of deriving certain predictions from theoretical assumptions, showing the predictions don't match reality, and therefore considering that this is experimental evidence that the theory doesn't describe the real world? This is just how any theory of physics would be falsified experimentally!
billschnieder said:
Also the phrase "experimental loopholes" is a misnomer because it gives the false impression that there is something "wrong" with the experiments, such that "better" experiments have to be performed. This is a backward look at it. Every so-called "loophole" is actually a hidden assumption made by Bell in deriving his inequalities.
The loopholes are just based on actual experiments not meeting the observable experimental conditions Bell was assuming would hold in the theoretical experiments that the inequalities are supposed to apply to, like the idea that there should be a spacelike separation between measurements (if an actual experiment doesn't conform to this, it falls prey to the locality loophole). None of them are based on whether the theoretical assumptions about the laws of physics used in the derivation (like the assumption that the universe follows local realist laws) are true or false.

To put it another way, Bell proved that (specified observable experimental conditions, like spacelike separation between measurements) + (theoretical assumptions about laws of physics, like local realism) = (Bell inequalities). So, if a real experiment matches all the observable experimental conditions but does not give results which satisfy the Bell inequalities, that's a good experimental falsification of the theoretical assumptions about the laws of physics that Bell made. On the other hand, if an experiment doesn't match all those observable conditions, then even if it violates Bell inequalities there may still be some remaining possibility that the theoretical assumptions actually do apply in our universe (so our universe might still obey local realist laws)
billschnieder said:
When I mentioned "assumption" previously, you seemed to express surprise, despite the fact that I have already pointed out to you several times hidden assumptions within Bell's treatment that make it incompatible with Aspect-type experiments.
And I've pointed out that some of the "hidden assumptions" you claimed were needed, like controlling for all the hidden variables, were not necessary. In this post you even seemed to be starting to get the point when you asked:
Is it your claim that Bell's "population" is defined in terms of "an infinite set of repetitions of the exact observable experimental conditions you were using"? If that is what you mean here then I fail to see the need to make any fair sampling assumption at all.
To which I responded in post #126 on that thread:
In the part in bold I think I made clear that Bell's proof would only apply to the exact observable experimental conditions you were using if it was true that those conditions met the "basic criteria" I mentioned above. I allowed for the possibility that 100% detector efficiency might be one of the conditions needed--DrChinese's subsequent posts seem to say that the original Bell inequalities do require this assumption, although perhaps you can derive other inequalities if the efficiency lies within some known bounds, and he seemed to say that local realist theories which tried to make use of this loophole would need some other physically implausible features. As I said above in my response to #110 though, I would rather keep the issue of the detector efficiency loophole separate from your other critiques of Bell's reasoning, which would seem to apply even if we had an experiment that closed all these known loopholes (and apparently there was one experiment with perfect detector efficiency but it was vulnerable to a separate known loophole).
But of course that didn't go anywhere because you didn't respond to this, and ended up arguing that frequentist definitions of probability were so inherently horrible that you refused to adopt them even for the sake of argument, even if they were the type of probability likely being assumed by Bell in his proof.
billschnieder said:
If any one or more of the assumptions in Bell's treatment are not met in the experiments, Bell's inequalities will not apply. The locality assumption is explicit in Bell's treatment, so Bell's proponents think violation of the inequalities definitely means violation of the locality principle. But there are other hidden assumptions such as:

1) Every photon pair will be detected (due to choice of only +/- as possible outcomes)
This is an observable experimental condition (at least it's observable whether every detection at one detector is part of a coincidence with a detection at the other, and it shouldn't be possible to come up with a local hidden variables model where the hidden variables influence the chance of nondetection in such a way that if one photon isn't detected the other's guaranteed not to be either despite the random choice of detector settings, and have this lead to a Bell inequality violation).
billschnieder said:
2) P(lambda) is equivalent for each of the terms of the inequality
This is the no-conspiracy assumption, and given that lambda can represent local facts at a time before the experimenters make a choice of which detector setting to use (with the choice made using any random or pseudorandom method they like), it's not hard to see why a theory that violated this would have some very implausible features.
billschnieder said:
3) Datasets of pairs are extracted from a dataset of triples
As I said in my previous post:
The Bell-type inequalities are based on the theoretical assumption that on each trial there is a λ which either predetermines a definite outcome for each of the three detector settings (like the 'hidden fruits' that are assumed to be behind each box in my scratch lotto analogy), or at least predetermines a probability for each of the three which is not influenced by what happens to the other particle at the other detector (i.e. P(A|aλ) is not different from P(A|Bbaλ)). If this theoretical assumption were valid, and the probability of different values of λ on each trial did not depend on the detector settings a and b on that trial, then this would be a perfectly valid situation where these inequalities would be predicted to hold.
So, this just reduces to the assumption of local realism plus the no-conspiracy assumption, it's not an independent assumption.
billschnieder said:
4) Non-contextuality
As I argued in this post, I think you're incorrect that this is necessary for Bell's proof:
In a local realist theory there is an objective truth about which variables are associated with a given point in spacetime (and the values of those variables). This would include any variables associated with the region of spacetime occupied by the moon, and any associated with the region of spacetime occupied by a human. The variables associated with some humans might correspond to a state that we could label "observing the moon", and the variables associated with other humans might correspond to a state we could label "not observing the moon", but the variables themselves are all assumed to have an objective state that does not depend on whether anyone knows about them.

A "contextual" hidden variables theory is one where knowledge of H is not sufficient to predetermine what results the particle will give for any possible measurement of a quantum-mechanical variable like position or momentum, the conditions at the moment of measurement (like the exact state of the measuring device at the time of measurement) can also influence the outcome--see p. 39 here on google books, for example. This doesn't mean that all fundamental variables (hidden or not) associated with individual points in spacetime don't have definite values at all times, it just means that knowing all variables associated with points in the past light cone of the measurement at some time t does not uniquely determine the values of variables in the region of spacetime where the measurement is made (which tell you the outcome of the measurement).
If we assume that the particles always give the same results (or opposite results) when the same detector settings are used, then we can derive from other assumptions already mentioned that this implies the results for each possible setting must be predetermined (making it a non-contextual theory); I can explain if you like. But Bell derived inequalities which don't depend on this assumption of predetermined results for each setting; see p. 12 of http://cdsweb.cern.ch/record/142461/files/198009299.pdf where he writes:
It was only in the context of perfect correlation (or anticorrelation) that determinism could be inferred for the relation of observation results to pre-existing particle properties (for any indeterminism would have spoiled the correlation). Despite my insistence that the determinism was inferred rather than assumed, you might still suspect somehow that it is a preoccupation with determinism that creates the problem. Note well then that the following argument makes no mention whatever of determinism.
billschnieder said:
5) ...

And the others I have not mentioned or that are yet to be discovered. So whenever you hear about the "detection efficiency loophole", the issue really is a failure of hidden assumption (1). And the other example I gave a few posts back, about cyclicity and indexing, involves the failure of (2) and (3).
As I point out above, there aren't really that many independent theoretical assumptions, and any theoretical assumptions beyond local realism would require some very weird conditions (like parallel universes, or 'conspiracies' in past conditions that predetermine what choice the experimenter will make on each trial and tailor the earlier hidden variables to those future choices) in order to be violated.
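To make the "conspiracy" option concrete, here is a minimal Python sketch (an illustration added here, not part of anyone's original argument): if the distribution of hidden outcomes is allowed to depend on the detector settings that will later be used, a source emitting purely local, predetermined outcomes can trivially reproduce a singlet-type correlation E(x,y) = -cos(x - y). The settings, probabilities and trial count below are hypothetical; the point is only that dropping the no-conspiracy assumption is enough to mimic the quantum statistics.

Code:
import math
import random

random.seed(0)

def conspiratorial_source(x, y):
    # Hypothetical source that "knows" the future settings (x, y) and tailors
    # the predetermined local outcomes so that <A*B> = -cos(x - y).
    A = random.choice([-1, 1])
    p_anti = (1 + math.cos(x - y)) / 2       # probability that B = -A
    B = -A if random.random() < p_anti else A
    return A, B

x, y = 0.0, math.pi / 3                      # hypothetical detector settings
trials = [conspiratorial_source(x, y) for _ in range(200_000)]
E = sum(A * B for A, B in trials) / len(trials)
print("simulated E = %+.3f, target -cos(x-y) = %+.3f" % (E, -math.cos(x - y)))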
billschnieder said:
I make an effort to explain my point of view; you are free to completely demolish it with legitimate arguments. I will continue to point out the flaws I see in your responses (as long as a relevant response can be discerned from them)
Ah, so as long as you deem it not "relevant" you are free not to address my central arguments, like not explaining what flaws you saw in my reading of the specific quotes from the Stanford Encyclopedia of Philosophy article on the principle of common cause (since your entire refutation to my past light cone argument ended up revolving around quotes from that article), or not even considering whether the probabilistic statements Bell makes might make sense when interpreted in frequentist terms with the correct understanding of the "population" of experiments (with the 'population' being one defined solely in terms of observable experimental conditions with no attempt to 'control for' the value of hidden variables, so that by the law of large numbers any real experiment matching those conditions should converge on the ideal probabilities in a large number of trials if the basic theoretical assumptions like local realism were valid). Both of these were central to my counterarguments to two of your main anti-Bell arguments, the first being that Bell's equation (2) was not legitimately derivable from the assumption of local realism, the second being that it would be impossible in principle to test whether Bell's theoretical assumptions held in the real world without knowing the value of all hidden variables in each experiment and controlling for them. But since you decided these counterarguments weren't "relevant" you simply didn't give them any substantive response.
billschnieder said:
But if you can not provide a legitimate argument and you think of the goal of discussion as one of winning/losing, you may be inclined to interpret my conviction about my point of view to be "triumphant/mocking". But that is just your perspective and you are entitled to it, even if it is false.
I'll leave it to others to decide whether quotes like the following have a tone of "triumphant" dismissal or whether they simply express an attitude of caution about whether there is a slight possibility the universe obeys local realist laws that exploit both detection loopholes simultaneously:
Now that this blatant error is clear, let us look at real experiments to see which approach is more reasonable, by looking at what proportion of photons leaving the source is actually detected.

For all Bell-test experiments performed to date, only 5-30% of the photons emitted by the source have been detected, with only one exception. And this exception, which I'm sure DrC and JesseM will remind us of, had other more serious problems. Let us make sure we are clear what this means.

It means that in almost all those experiments usually thrown around as proof of non-locality, P(case4) has been at most 30% and even as low as 5% in some cases. The question then is, where did the whopping 70% go?

Therefore it is clear first of all by common sense, then by probability theory, and finally confirmed by numerous experiments that non-detection IS an issue and should have been included in the derivation of the inequalities!
(from post #930--part in bold sounds a bit 'mocking' to me, and note that the claim of 'only one exception' was posted after my post #152 on the 'Understanding Bell's Logic' thread where I told you that other experiments closing the detection loophole had been done)
Therefore correlations observed in real experiments in which non-detection matters can not be compared to idealized theoretical proofs in which non-detection was not considered since those idealized theoretical proofs made assumptions that will never be fulfilled in any real experiments.
(from post #932--a blanket dismissal of the relevance of all 'real experiments', no nuance whatsoever)
What has this got to do with anything. If there was a convincing experiment which fulfilled all the assumptions in Bell's derivation, I would change my mind. I am after the truth, I don't religiously follow one side just because I have invested my whole life to it.
(from #936--sounds rather mocking again, or was there no implication here that others like myself or DrChinese are religiously following one side because we've invested our lives in it?)
JesseM said:
Don't know about that precise inequality, but as I mentioned in an earlier post:
Did I hear ONE with a but attached?
(from post #151 on 'Understanding Bell's Logic'--again, sounds completely dismissive, no actual interest in what the experiment might tell us about the likelihood of a loophole-exploiting hidden variables theory)
 
  • #964
JesseM, regarding intellectual humility, don't ever doubt that I'm very thankful that there are educated people like you and DrC willing to get into the details, and explain your current thinking to feeble minded laypersons, such as myself, who are interested in and fascinated by various physics conundrums.

JesseM said:
It (the nonviability of Bell's 2) implies the falsity of local realism, which means if you are a realist who believes in an objective universe independent of our measurements, and you don't believe in any of the "weird" options like parallel worlds or "conspiracies", your only remaining option is nonlocality/ftl.
I think this is a false dichotomy which is recognized as such by mainstream physicists. Otherwise, why wouldn't all physicists familiar with Bell's work believe that nature is nonlocal (the alternative being that nature simply doesn't exist independent of our measurements)?

You've said that Bell's(2) isn't about entanglement. Then how can its falsification be telling us anything about the nature of entanglement (such as that entangled disturbances are communicating nonlocally)?

And, if it isn't a correct model of the underlying reality, which is one way of looking at it, then how can its falsification be telling us that an underlying reality doesn't exist?

As you're well aware, there are many physicists quite familiar with Bell's work who don't agree with your statement of the choices entailed by violations of BIs. If, as has been suggested, a majority of physicists think that nature is nonlocal, then why hasn't there been a paradigm shift reflecting that view? Well, I suggest, a reasonable hypothesis would be simply that a majority of physicists don't think that nature is nonlocal. (Though they might agree with the notion of quantum nonlocality, but more on that below.)

In support of that hypothesis, it's noted that Bohm's explicitly nonlocal theory has been around for 60 years. It occupies a certain niche in theoretical and foundational research. But it's certainly not accepted as the mainstream view.

I respectfully have to reject your assessment of the meaning of Bell's(2) and violations of BIs based on it, and your assessment of the mainstream view on this. My guess is that most physicists familiar enough with BIs to make an informed assessment of their physical meaning do not think that their violation implies either that nature is nonlocal or that there's no reality independent of measurements.

JesseM said:
And in technical subjects like science and math, I think it's perfectly valid to say that if some layman doesn't understand the issues very well but is confused about the justification for some statement that virtually all experts endorse, the default position of a layman showing intellectual humility should be that it's more likely the mistake lies with his/her own understanding, rather than taking it as a default that they've probably found a fatal flaw that all the experts have overlooked and proceeding to try to convince others of that.
You're saying that the "statement that virtually all experts endorse" is the dichotomy that nature is either nonlocal (taking, in keeping with the theme of this thread, the term 'nonlocality' to mean 'action-at-a-distance') or that there is no nature independent of observations. I'm saying that I think that virtually all experts would view that as a false dichotomy. This would seem to require some sort of poll. If I get time to look for one, and find it, then I'll let you know the results.

Of course, there are other views of nonlocality. I think that the term, quantum nonlocality, doesn't mean 'action-at-a-distance' to most physicists. It refers to a certain formalization of certain experimental situations, and the symbolic manipulations entailed by qm. In other words, quantum nonlocality has no particular physical meaning apart from the formalism and the experiments to which it's applied -- ie., it isn't telling us anything about the existence or nature of a reality underlying instrumental behavior.

Local realism refers to the assumption that there is an objective (though unknown) reality underlying instrumental behavior, and that it's evolving in accordance with the principle of local causality. EPR's elements of reality, as defined wrt the specific experimental situation they were considering, represent a special case and subset of local realism.

There are models of entanglement which are, ostensibly, local, but not realistic, or realistic, but not local, or, both local and realistic, which reproduce the qm predictions.
 
  • #965
ThomasT said:
As you're well aware, there are many physicists quite familiar with Bell's work who don't agree with your statement of the choices entailed by violations of BIs.

You're saying that the "statement that virtually all experts endorse" is the dichotomy that nature is either nonlocal (taking, in keeping with the theme of this thread, the term 'nonlocality' to mean 'action-at-a-distance') or that there is no nature independent of observations. I'm saying that I think that virtually all experts would view that as a false dichotomy. This would seem to require some sort of poll.

JesseM has stated it correctly. I don't know of a physicist in the field (other than a small group like Santos, Hess, Philipp, etc.) that does NOT agree with JesseM's assessment. Certainly you won't find any mention of dissent on this point in a textbook on the subject. I have given repeated references to roundups on the subject, including yesterday, which make this clear. In light of JesseM's statement to you, he is politely asking you to quit acting as if your minority view is more widely accepted than it is. It confuses readers like JenniT and others.

You may consider it a "false dichotomy"; but as Maaneli is fond of pointing out, you don't have to take it as a dichotomy at all! You can take it as ONE thing as a whole too: local causality is rejected. That is a complete rejection of your position regardless.

A wise person would have no issue with being a bit more humble. You can express yourself without acting like you know it all. I appreciate that after reviewing the case for Bell/Bell tests, you reject the work of thousands of physicists because of your gut feel on the matter. But that is not something to brag about.
 
  • #966
my_wan said:
JesseM said:
I define "local realism" to mean that facts about the complete physical state of any region of spacetime can be broken down into a sum of local facts about the state of individual points in spacetime in that region (like the electromagnetic field vector at each point in classical electromagnetism), and that each point can only be causally influenced by other points in its past light cone. Do you think that this is too "narrowly defined" or that EPR would have adopted a broader definition where the above wasn't necessarily true? (if so, can you provide a relevant quote from them?) Or alternatively, do you think that Bell's derivation of the Bell inequalities requires a narrower definition than the one I've just given?
Ok, that works. But I got no response on what effect the non-commutativity of vector products, even of classical vectors, has on the computational demands of modeling BI violations. If these elements are transfinite, what role might Hilbert's paradox of the Grand Hotel play in such effects?
But Bell's proof is abstract and mathematical, it doesn't depend on whether it is possible to simulate a given hidden variables theory computationally, so why does it matter what the "computational demands of modeling BI violations" are? I also don't understand your point about a transfinite set of hidden variables and Hilbert's Hotel paradox...do you think there is some specific step in the proof that depends on whether lambda stands for a finite or transfinite number of facts, or that would be called into question if we assumed it was transfinite?
my_wan said:
EPR correlations are certainly not unique in requiring relative offsets versus absolute coordinate values. SR is predicated on it. If observables are projections from a space with an entirely different metric, which doesn't commute with a linear metric of the space we measure, that could impart computational difficulties which BI doesn't recognize.
I'm not sure what you mean by "projections from a space"...my definition of local realism above was defined in terms of points in our observable spacetime, if an event A outside the past light cone of event B can nevertheless have a causal effect on B then the theory is not local realist theory in our spacetime according to my definition, even if the values of variables at A and B are actually "projections" from a different unseen space where A is in the past light cone of B (is that something like what you meant?)
my_wan said:
I didn't get any response involving Maxwell's equations either.
Response to which question?
my_wan said:
Physics is contingent upon operational, not philosophical, claims. What did the original EPR paper say about it?
1) "far from exhausting all possible ways to recognize physical reality"
2) "Regarded not as a necessary, but merely as a sufficient, condition of reality"
3) "A comprehensive definition of reality is, however, unnecessary for our purposes"

In other words they chose a definition that had an operational value in making the point more concise, not a definition which defined reality itself.
They did make the claim that there should in certain circumstances be multiple elements of reality corresponding to different possible measurements even when it is not operationally possible to measure them all simultaneously, didn't they?
my_wan said:
True, technical definitions confuse the uninitiated quite often. Locality is one of them, which is predicated on relativity. Thus it's in principle possible to violate locality without violating realism, as the stated EPR consequences recognize.
Sure, Bohmian mechanics would usually be taken as an example of this.
my_wan said:
Yet if "realism" is technically predicated on the operational definition provided by EPR, why reject out of hand definitions counter to that EPR provided as a source of research?
I don't follow, what "definitions counter to that EPR provided" are being rejected out of hand?
JesseM said:
Not sure what you mean. Certainly there's no need to assume, for example, that when you measure different particle's "spins" by seeing which way they are deflected in a Stern-Gerlach device, you are simply measuring a pre-existing property which each particle has before measurement (so each particle was already either spin-up or spin-down on the axis you measure).
my_wan said:
Unless observables are a linear projection from a space which has a non-linear mapping to our measured space of variables, to name just one.
What's the statement of mine you're saying "unless" to? I said "there's no need to assume ... you are simply measuring a pre-existing property which each particle has before measurement", not that this was an assumption I made. Did you misunderstand the structure of that sentence, or are you actually saying that if "observables are a linear projection from a space which has a non-linear mapping to our measured space of variables", then that would mean my statement is wrong and that there is a need to assume we are measuring pre-existing properties the particle has before measurement?
JesseM said:
Don't know what you mean by that either. Any local physical fact can be defined in a way that doesn't depend on a choice of coordinate system, no?
my_wan said:
Yes, assuming the variables required are algorithmically compressible or finite.
Why would infinite or non-compressible physical facts be exceptions to that? Note that when I said "can be defined" I just meant that a coordinate-independent description would be theoretically possible, not that this description would involve a finite set of characters that could be written down in practice by a human. For example, there might be some local variable that could take any real number between 0 and 1 as a value, all I meant was that the value (known by God, say) wouldn't depend on a choice of coordinate system.
my_wan said:
I never got this answer either: what does it mean when you can model a rotation of the beam in an EPR model and maintain BI violations while individual photon paths vary, yet the apparently physical equivalent of uniformly rotating the pair of detectors destroys it? Are they not physically equivalent transforms?
As you rotate the direction of the beams, are you also rotating the positions of the detectors so that they always lie in the path of the beams and have the same relative angle between their orientation and the beam? If so, this doesn't really seem physically equivalent to rotating the detectors alone, since then the relative angle between the detector orientation and the beam would change.
my_wan said:
If that is your definition of local realism, fine. But you can't make claims that your definition precludes alternatives, or that alternatives are precluded by your definition. You want a broader "technical" definition of realism? It's not mine, it came from exactly the same source as yours did. "An element of reality that exists independent of any measurement". That's it.
But that's just realism, it doesn't cover locality (Bohmian mechanics would match that notion of realism for example). I think adding locality forces you to conclude that each basic element of reality is associated with a single point in spacetime, and is causally affected only by things in its own past light cone.
 
  • #967
JesseM said:
It (the nonviability of Bell's 2) implies the falsity of local realism, which means if you are a realist who believes in an objective universe independent of our measurements, and you don't believe in any of the "weird" options like parallel worlds or "conspiracies", your only remaining option is nonlocality/ftl.
ThomasT said:
I think this is a false dichotomy which is recognized as such by mainstream physicists. Otherwise, why wouldn't all physicists familiar with Bell's work believe that nature is nonlocal (the alternative being that nature simply doesn't exist independent of our measurements)?
Many physicists have a basically positivist attitude and don't think it's worth talking about questions that aren't experimentally testable (which by definition includes any questions about what's going on with quantum systems when we aren't measuring them). As I noted though, even if you do want to take a "realist" attitude towards QM, there are a few other "weird" options which allow you to avoid FTL, like the many-worlds interpretation (which is actually very popular among physicists who have opinions about the 'interpretation' of QM), or possibly some form of backwards causality which allows for violations of the no-conspiracy assumption (because the later choice of detector settings can have a backwards influence on the probability the source emits particles with different values of hidden variables). So most realist physicists would probably consider it an open question whether nature takes one of these other "weird" options as opposed to the "weird" option of FTL/nonlocal influences between particles. Either way, I think virtually every mainstream physicist would agree the non-"weird" option of local realism is incompatible with QM theoretically, and can be pretty safely ruled out based on experiments done so far even if none has been completely perfect.
ThomasT said:
You've said that Bell's(2) isn't about entanglement. Then how can its falsification be telling us anything about the nature of entanglement (such as that entangled disturbances are communicating nonlocally)?
Because it's about constraints on the statistics in experiments which meet certain experimental conditions, given the theoretical assumption of local realism--since QM's predictions about entanglement say that these statistical constraints will be violated in experiments meeting those same specified experimental conditions, that shows that QM and local realism are incompatible with one another.
ThomasT said:
And, if it isn't a correct model of the underlying reality, which is one way of looking at it, then how can its falsification be telling us that an underlying reality doesn't exist?
Because it's a general model of any possible theory that would qualify as "local realist" as physicists understand the term, which I take as basically equivalent to the definition I gave my_wan:
I define "local realism" to mean that facts about the complete physical state of any region of spacetime can be broken down into a sum of local facts about the state of individual points in spacetime in that region (like the electromagnetic field vector at each point in classical electromagnetism), and that each point can only be causally influenced by other points in its past light cone.
So, a falsification of the predictions of this general model constitutes a falsification of "local realism" as in my definition above.
ThomasT said:
As you're well aware, there are many physicists quite familiar with Bell's work who don't agree with your statement of the choices entailed by violations of BIs.
"Many" who aren't regarded as crackpots by the mainstream community? (i.e. not someone like Kracklauer who would fit 't Hooft's description of a bad theoretical physicist very well) If so, can you give some examples? DrChinese, who's a lot more familiar with the literature on this subject than I, said:
I don't know of a physicist in the field (other than a small group like Santos, Hess, Philipp, etc.) that does NOT agree with JesseM's assessment. Certainly you won't find any mention of dissent on this point in a textbook on the subject. I have given repeated references to roundups on the subject, including yesterday, which makes this clear.
ThomasT said:
If, as has been suggested, a majority of physicists think that nature is nonlocal
I don't necessarily think a majority would endorse that positive conclusion, for the reasons I gave above. But virtually everyone would agree local realism can be ruled out, aside from a few "weird" variants like the ones I mentioned involving violations of various conditions that appear in rigorous versions of Bell's argument (like the no-conspiracy condition).
ThomasT said:
I respectfully have to reject your assessment of the meaning of Bell's(2)
You "reject" it without being willing to engage with my specific arguments as to why it's implied by local realism, like the one about past light cones in post #63 here which I've directed you to a few times, and also without being willing to answer my detailed questions about your claims to have an alternative model involving polarization vectors. This doesn't sound like the attitude of an open-minded inquirer into truth, but rather someone with an axe to grind against Bell based on gut feelings that there must be some flaw in the argument even if you can't quite pinpoint what it is.
ThomasT said:
and violations of BIs based on it, and your assessment of the mainstream view on this. My guess is that most physicists familiar enough with BIs to make an informed assessment of their physical meaning do not think that their violation implies either that nature is nonlocal or that there's no reality independent of measurements.
As I said, there are other options besides "nature is nonlocal" or "no reality independent of measurements", including both the popular many-worlds interpretation and the even more popular positivist attitude of not caring about any questions that don't concern measurements (or at least not thinking them subjects for science).
ThomasT said:
You're saying that the "statement that virtually all experts endorse" is the dichotomy that nature is either nonlocal (taking, in keeping with the theme of this thread, the term 'nonlocality' to mean 'action-at-a-distance') or that there is no nature independent of observations.
No, I had already mentioned other "weird" options like parallel universes or violations of the no-conspiracy condition in previous posts to you.
ThomasT said:
Local realism refers to the assumption that there is an objective (though unknown) reality underlying instrumental behavior, and that it's evolving in accordance with the principle of local causality. EPR's elements of reality, as defined wrt the specific experimental situation they were considering, represent a special case and subset of local realism.
Your definition of "local realism" seems to match the one I gave to my_wan, and Bell's proof is broad enough to cover all possible theories that would be local realist in this sense.
ThomasT said:
There are models of entanglement which are, ostensibly, local, but not realistic, or realistic, but not local, or, both local and realistic, which reproduce the qm predictions.
There are no "models of entanglement which are ... both local and realistic, which reproduce the qm predictions", at least not ones which match the other conditions in Bell's proof like each measurement having a unique outcome (no parallel universes) and no "conspiracies" creating correlations between random choice of detector settings and prior values of hidden variables (and again, his equation (2) is not an independent condition, it follows logically from the other conditions). If you disagree, please point to one!
 
  • #968
JesseM said:
Are you saying that Leggett and Garg themselves claimed that their inequality should apply to situations where the three values a,b,c don't represent times of measurement, including the scenario with doctors collecting data on patients from different countries?
Your changing argument against this counter-example has been mostly dismissive.

First you tried to suggest that the inequality I provided was not the same as the one of Leggett and Garg, when a simple check of the original LG article would have revealed it right there. Then you tried suggesting that the inequality does not apply to the counter-example I presented, pointing to an appendix of an unpublished thesis (and we are not even sure if the guy passed) as evidence to support your claim.

All along, you make no effort to actually understand what I am telling you. And this is the pattern with your responses. As soon as you see a word in an opposing post, you immediately think you know what the point is and you reproduce your pre-canned recipes of counter-arguments without making an effort to understand the specific opposing argument being made. And your recent diatribe about a previous discussion on PCC shows the same, combined with selective memory of those discussions which are in the open for anyone to read. The following analogy summarizes your approach.

Person1: " 1 apple + 1 orange is not equivalent to 2 pears"
JesseM: "1 + 1 = 2, I can prove it ... <insert 5 pages of extensive text and proofs> ... Do you disagree?"
Person1: "Your response is irrelevant to the issue"
JesseM: "Are you going to answer my question or not?
Person1: <ignores JesseM>
JesseM: <50 posts and 10 threads later> "The fact that you refused to respond to my question in post <##> shows that you are only interested in rhetoric"

Now back to the subject of LGI, I have repeatedly told you it doesn't matter what a,b,c are, any inequalities of that mathematical form will be violated if the data being compared to the inequalities are not correctly indexed to maintain the cyclicity. I have very clearly explained this numerous times. Don't you realize it is irrelevant to my argument to then try to prove to me that Leggett and Garg used ONLY time to derive their inequalities? Just because LG used time to arrive at their inequalities does not mean correctly indexing the data is not required. I have given you a reference to an article by Boole more than a century ago in which he derived similar inequalities using just boolean algebra without any regard to time, yet you complain that the article is too long and you don't like Boole's notation. The language may be dated but the notation is quite clear, if you actually read the text to find out what the symbols mean. Well, here is a simplified derivation using familiar symbols so that there can be no escape from the fact that such inequalities can be derived on a purely mathematical basis:

Define a boolean variable v that takes the values 0 or 1, and define \overline{v} = 1 - v.
Now consider three such boolean variables x, y, z which can occur together in any experiment

It therefore follows that:
1 = \overline{xyz}+x\overline{yz}+x\overline{y}z+\overline{x}y\overline{z}+xy\overline{z}+\overline{xy}z+\overline{x}yz + xyz

We can then group the terms as follows so that each group in parentheses can be reduced to products of only two variables.

1 = \overline{xyz}+(x\overline{yz}+x\overline{y}z)+(\overline{x}y\overline{z}+xy\overline{z})+(\overline{xy}z+\overline{x}yz) + xyz

Performing the reduction, we obtain:
1 = \overline{xyz}+(x\overline{y})+(y\overline{z})+(\overline{x}z) + xyz

Which can be rearranged as:
x\overline{y}+y\overline{z}+\overline{x}z = 1 - (\overline{xyz} + xyz)

But since the last two terms on the RHS are either 0 or 1, you can write the following inequality:
x\overline{y}+y\overline{z}+\overline{x}z \leq 1

This is Boole's inequality and you can find similar ones on pages 230 and 231 of Boole's article.
In Bell-type situations, we are interested not in boolean variables with possible values (0, 1) but in variables with values (+1, -1), so we can define three such variables a, b, c where a = 2x - 1, b = 2y - 1 and c = 2z - 1.

Remembering that \overline{x} = 1 - x, substituting into the above inequality, and keeping on the LHS only the terms involving products of pairs, you obtain the following inequality:

-ab - ac - bc \leq 1

from which you can obtain the following inequality by replacing a with -a.

ab + ac - bc \leq 1

These two inequalities can be combined into the form

|ab + ac| \leq 1 + bc

Which is essentially Bell's inequality. If you doubt this result, you can try doing the math yourself and confirm that this is valid. Note that we have derived this simply by assuming that we have three dichotomous variables occurring together from which we extract products of pairs, using simple algebra without any assumptions about time, locality, non-invasiveness, past light-cones or even hidden variables, etc. Therefore their violation by data does not mean anything other than a mathematical problem with the way the data is treated. The counter-example I presented shows this very clearly; that is why when you keep focusing on "time", or "non-invasiveness", thinking that it addresses the issue, I do not take you seriously. So try and understand the opposing argument before you attempt countering it.
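As a quick sanity check of the algebra above (an illustration added here, not part of the original post), a few lines of Python can enumerate all eight sign assignments of (a, b, c) and confirm both the -ab - ac - bc <= 1 form and the combined |ab + ac| <= 1 + bc form. Since the bound holds for every individual triple, it also holds for averages taken over any dataset of complete triples.

Code:
from itertools import product

# Check Boole's inequality -ab - ac - bc <= 1 and the combined form
# |ab + ac| <= 1 + bc for every assignment of a, b, c in {-1, +1}.
for a, b, c in product([-1, 1], repeat=3):
    assert -a*b - a*c - b*c <= 1
    assert abs(a*b + a*c) <= 1 + b*c

print("Both inequalities hold for all eight sign assignments.")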
 
  • #969
JesseM said:
A violation of the inequalities by data which doesn't match the conditions Bell and Leggett-Garg and Boole assumed when deriving them doesn't indicate a flaw in reasoning which says the inequalities should hold if the conditions are met.

Here again you are arguing that 1 + 1 = 2, completely ignoring the point, which, simply stated, is this:
"Violation of the inequalities derived by using a series of assumptions (Ai, i=1,2,3,...,n) by data, means ONLY that one or more of the assumptions (Ai, i=1,2,3,...,n) is false!"
If A1 = "Locality", and you conclude that violation of the inequality implies non-locality, you are being intellectually dishonest, because you know very well that failure of any of the other assumptions can lead to violations even if the locality assumption is true. This is the whole point of the discussion! Again, if you were actually trying to understand my argument, you would have realized this a while ago.
If you insist that the inequalities were derived precisely to describe the Aspect-type experimental situation, as you have sometimes claimed previously, then I will argue that the inequalities are flawed because, for the numerous reasons presented here and well recognized in the mainstream, no single experiment has yet satisfied all the assumptions inherent in their derivation. However, if you insist that the inequalities only apply to some ideal experiments which fulfill those assumptions, as I have mentioned many times previously and I doubt anyone here believes otherwise, then those idealized inequalities, however perfect they are, can not be compared to real experiments unless there is independent justification of correspondence between the data from these experiments and the terms within the inequalities. So in case you want to continue to provide proof that 1 + 1 = 2, read this paragraph again and make sure you understand the point.

... from an objective point of view (the point of view of an all-knowing omniscient being)
Again, you are trying to argue that 1 + 1 = 2. How many times will I tell you that experiments are not performed by omniscient observers before it will sink in? You can imagine all you want about an omniscient being, but your imagination will not be comparable to a real experiment by real experimenters.

In the frequentist understanding of probability, this means that in the limit as the number of sample pairs goes to infinity, the frequency at which any given triplet (or any given ordered pair of triplets if the two members of the sample pair are taken from different triplets) is associated with samples of type AaAb should be the same as the frequency at which the same triplet is associated with samples of type AaAc and AbAc,
...
this isn't a case where each sample is taken from a "data point" consisting of triplet of objective (hidden) facts about a,b,c, such that the probability distribution on triplets for a sample pair AaAb is the same as the probability distribution on triplets for the other two sample pairs AaAc and AbAc. In the frequentist understanding of probability, this means that in the limit as the number of sample pairs goes to infinity, the frequency at which any given triplet (or any given ordered pair of triplets if the two members of the sample pair are taken from different triplets) is associated with samples of type AaAb should be the same as the frequency at which the same triplet is associated with samples of type AaAc and AbAc. If the "noninvasive measurability" criterion is met in a Leggett-Garg test, this should be true of the measurements at different pairs of times of SQUIDS if local realism is true. Likewise, if the no-conspiracy condition is true in a test of the form Bell discussed in his original paper, this should also be true if local realism is true.
Are you making a point by this? You just seem to be rehashing here exactly what is already mentioned in the paper: the fact that to a non-omniscient being without knowledge of all the factors in play, A1a is not different from Aa, which is precisely why the inequality is violated. So it is unclear what your point is.

4) You do not deny the fact that in the example, there is no way to ensure the data is correctly indexed unless all relevant parameters are known by the experimenters

I would deny that, at least in the limit as the number of data points becomes very large. In this case they could just pool all their data, and use a random process (like a coinflip) to decide whether each Aa should be put in a pair with an Ab data point or an Ac data point, and similarly for the other two.
This is why I asked you to read the paper in full, because you do not know what you are talking about here. The experimenters did not suspect that the location of the test was an important factor, so their data was not indexed for location. That means they do not have any data point such as A1a(n). All they have is Aa(n). So I'm not sure what you mean by the underlined text. Also note that they are calculating averages of all their data, so I'm not sure why you would think randomly selecting them will make a difference.

Imagine having a bit-mapped image from which you want to extract pixels at random. For each pixel you record a triple of properties (x position, y position, and color). From the final dataset of triples, you can reconstruct the image. Now suppose that instead of collecting one dataset of triples, you collect two datasets of pairs, (x, y) and (y, color). What you are suggesting here is similar to the idea that you can still generate the image by randomly deciding which pair from the first dataset should be matched with which pair from the second dataset!

5) You do not deny that Bell's inequalities involve pairs from a set of triples (a,b,c) and yet experiments involve triples from a set of pairs.
I certainly deny this too, in fact I don't know what you can be talking about here.

In Bell's treatment the terms a, b, c represent a triple of angles for which it is assumed that a specific particle will have values for specific hidden elements of reality. The general idea, which DrC and yourself have mentioned several times, usually goes like this: "the particle has a specific polarization/spin for those different settings which exists before any measurement is made", and you have often called this "the realism assumption". So according to Bell, for each pair of particles under consideration, at least in the context of Bell's inequalities, there are three properties corresponding to (a,b,c). From these, Bell derives the inequality of the form
1 + E(b,c) >= |E(a,b) - E(a,c)|
Clearly, each term in the inequality involves a pair extracted from the triple (a,b,c). You could say the inequality involves a triple of pairs extracted from an ideal dataset of triples. In an actual experiment, we have ONLY two stations, so we can only have two settings at a time. Experimenters then collect a dataset which involves just pairs of settings. Therefore, to generate terms for the above inequalities from the data, the triple of pairs will have to be extracted from a dataset of pairs. Bell proponents think it is legitimate to substitute pairs extracted from a dataset of triples with pairs extracted from a dataset of pairs. (Compare with the image analogy above)

1) You do not deny that it is impossible to measure triples in any EPR-type experiment, therefore Bell-type inequalities do not apply to those experiments.
This one is so obviously silly you really should know better. The Bell-type inequalities are based on the theoretical assumption that on each trial there is a λ which either predetermines a definite outcome for each of the three detector settings (like the 'hidden fruits' that are assumed to be behind each box in my scratch lotto analogy) ...
Another example of answering without understanding the point you are arguing against. First, I have already pointed out to you that you can not compare an idealized theoretical construct with an actual experiment unless you can demonstrate that the terms in your idealized theoretical construct correspond to elements in the experiment. Secondly, I have explained why the fact that Aspect-type experiments only produce pairs of data points is a problem for anyone trying to compare those experiments with Bell inequalities. So, rather than throwing insults, if you know of an experiment in which a specific pair of entangled particles is measured at three different angles (a,b,c), then point it out.

I don't know what you mean by "Rij".
Try to derive the inequalities I derived above using three variables of which only two can occur together in any experiment. It can not be done. This demonstrates conclusively that you can not substitute a triplet of pairs extracted from a dataset of pairs into an inequality involving a triplet of pairs extracted from a dataset of triples. (See the image analogy above.)
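To see the contrast being argued over in one place, here is a minimal Python sketch (added as an illustration, not from either poster). It computes the terms of 1 + E(b,c) >= |E(a,b) - E(a,c)| in two ways: from a dataset of complete "instruction set" triples with built-in anti-correlation, for which the inequality always holds, and from the singlet-type pair correlation E(x,y) = -cos(x - y), sampled only two settings at a time, which violates it at suitably chosen angles. The angles and trial count are arbitrary illustrative choices.

Code:
import math
import random

random.seed(1)

# (1) Instruction-set model: each trial carries predetermined outcomes
#     (A_a, A_b, A_c) in {-1, +1}; the partner particle always gives the
#     opposite outcome, B_x = -A_x (perfect anti-correlation).
trials = [tuple(random.choice([-1, 1]) for _ in range(3)) for _ in range(100_000)]

def E(i, j):
    # correlation E = <A_i * B_j> with B_j = -A_j
    return sum(-t[i] * t[j] for t in trials) / len(trials)

E_ab, E_ac, E_bc = E(0, 1), E(0, 2), E(1, 2)
print("triples dataset satisfies the inequality:",
      abs(E_ab - E_ac) <= 1 + E_bc)              # True for any such dataset

# (2) Singlet-type pair correlation, only ever sampled two settings at a time.
a, b, c = 0.0, math.pi / 3, 2 * math.pi / 3      # arbitrary illustrative angles
Eqm = lambda x, y: -math.cos(x - y)
print("singlet correlation satisfies the inequality:",
      abs(Eqm(a, b) - Eqm(a, c)) <= 1 + Eqm(b, c))   # False at these angles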
 
  • #970
billschnieder said:
In Bell's treatment the terms a, b, c represent a triple of angles for which it is assumed that a specific particle will have values for specific hidden elements of reality. The general idea, which DrC and yourself have mentioned several times, usually goes like this: "the particle has a specific polarization/spin for those different settings which exists before any measurement is made", and you have often called this "the realism assumption". So according to Bell, for each pair of particles under consideration, at least in the context of Bell's inequalities, there are three properties corresponding to (a,b,c). From these, Bell derives the inequality of the form

1 + E(b,c) >= |E(a,b) - E(a,c)|

Clearly, each term in the inequality involves a pair extracted from the triple (a,b,c). You could say the inequality involves a triple of pairs extracted from an ideal dataset of triples. In an actual experiment, we have ONLY two stations, so we can only have two settings at a time. Experimenters then collect a dataset which involves just pairs of settings. Therefore, to generate terms for the above inequalities from the data, the triple of pairs will have to be extracted from a dataset of pairs. Bell proponents think it is legitimate to substitute pairs extracted from a dataset of triples with pairs extracted from a dataset of pairs. (Compare with the image analogy above)

Yes, I think that is a fair assessment of some of the key ideas of Bell. I think it is well understood that there are some sampling issues but that for the most part, they change little. Again, I realize you think sampling is a big "loophole" but few others do.

The fact that doesn't change, no matter how you cut it, is the one item I keep bringing up: It is not possible to derive a dataset for ONE sample of particles that provides consistency with QM statistics. In other words, forget entangled pairs... that is merely a device to test the underlying core issue. Once you accept that no such dataset is possible, which I know you do, then really the entire local realistic house of cards comes down. I know you don't accept that conclusion, but that is it for most everyone else.
 
  • #971
DrChinese said:
The fact that doesn't change, no matter how you cut it, is the one item I keep bringing up: It is not possible to derive a dataset for ONE sample of particles that provides consistency with QM statistics. In other words, forget entangled pairs... that is merely a device to test the underlying core issue. Once you accept that no such dataset is possible, which I know you do, then really the entire local realistic house of cards comes down. I know you don't accept that conclusion, but that is it for most everyone else.

The QM statistics predict precisely the outcome of those experiments, the experiments agree with QM, so the data from those experiments is already a dataset which agrees with QM; what more do you want? You will have to define precisely the experiment you want us to produce a dataset for and also provide the QM prediction for the specific experiment you describe. Asking that we produce a dataset from one type of experiment (which can never actually be performed) which matches the predictions QM gives for another type of experiment is not a serious request.
 
  • #972
DrChinese said:
They are often used differently in different contexts. The key is to ask: what pairs am I attempting to collect? Did I collect all of those pairs? Once I collect them, was I able to deliver them to the beam splitter? Of those photons going through the beam splitter, what % were detected? By analyzing carefully, the experimenter can often evaluate these questions. In state of the art Bell tests, these can be important - but not always. Each test is a little different. For example, if fair sampling is assumed then strict evaluation of visibility may not be important. But if you are testing the fair sampling assumption as part of the experiment, it would be an important factor.
Wrong. You are confusing visibility with detection efficiency.
Visibility is, roughly speaking, a signal-to-noise ratio. If the visibility is too low then you don't violate Bell inequalities (or CHSH) even assuming fair sampling.
So visibility is always important.
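To put a rough number on this (an illustration added here, assuming the commonly used form E(x,y) = -V*cos(2(x - y)) for polarization-entangled photons with correlation visibility V): the CHSH value at the standard angles scales as 2*sqrt(2)*V, so once V drops below roughly 0.71 no fair-sampling argument can produce a violation.

Code:
import math

def chsh(V):
    # CHSH value assuming E(x, y) = -V * cos(2*(x - y)) and the standard angles.
    a, a2, b, b2 = 0.0, math.pi / 4, math.pi / 8, 3 * math.pi / 8
    E = lambda x, y: -V * math.cos(2 * (x - y))
    return abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))

for V in (1.0, 0.85, 0.71, 0.5):
    print("V = %.2f  S = %.3f  violates S <= 2: %s" % (V, chsh(V), chsh(V) > 2))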

DrChinese said:
Clearly, the % of cases where there is a blip at Alice's station but not Bob's (and vice versa) is a critical piece of information where fair sampling is concerned. If you subtract that from 100%, you get a number. I believe this is what is referred to as visibility by Zeilinger but honestly it is not always clear to me from the literature. Sometimes this may be called detection efficiency. At any rate, there are several distinct issues involved.
You might confuse (correlation) visibility with detection efficiency but there is absolutely no reason to assume that authors of the paper have such confusion.

DrChinese said:
Keep in mind that for PDC pairs, the geometric angle of the collection equipment is critical. Ideally, you want to get as many entangled pairs as possible and as few unentangled as possible. If alignment is not correct, you will miss entangled pairs. You may even mix in some unentangled pairs (which will reduce your results from the theoretical max violation of a BI). There is something of a border at which getting more entangled is offset by getting too many more unentangled. So it is a balancing act.
This concerns visibility. But to have a high coincidence rate we need high coupling efficiency, and for that we should look at coupled photons versus uncoupled (single) photons (as opposed to entangled versus unentangled pairs).
If we observe a high coincidence rate, then we certainly have both high detection efficiency and high coupling efficiency. But of course we can have high detection efficiency and low coupling efficiency because of a poor source configuration, and in that case the high detection efficiency is of no use because the coincidence rate will be low anyway.
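A toy calculation can make this concrete (added here; it assumes the two arms are symmetric and that coupling and detection are independent, so a coincidence requires both photons to be coupled and detected, and the efficiency values are hypothetical): with 30% coupling per arm, even 90% detectors leave only about 7% of pairs as coincidences.

Code:
# Toy estimate: fraction of emitted pairs registered as coincidences,
# assuming independent, symmetric arms (hypothetical efficiency values).
def pair_detection_fraction(coupling, detector):
    return (coupling * detector) ** 2

for coupling, detector in [(0.3, 0.9), (0.3, 0.5), (0.9, 0.9)]:
    print("coupling=%.1f detector=%.1f -> coincidence fraction ~ %.3f"
          % (coupling, detector, pair_detection_fraction(coupling, detector)))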
 
  • #973
JesseM said:
See for example this paper and this one...the discussion seems fairly specific.
Well, you see, the problem here is that the authors of these papers assume that detector efficiency is the only obstacle to eliminating the detection loophole.
But if you look at actual experiments the picture seems a bit different. There does not seem to be any improvement in the coincidence detection rate for the full setup when you use detectors with high efficiency. The coincidence detection rate is still around 10% for experiments with high coincidence visibility.
The crucial part in this is another piece of equipment that is used in experiments. There are frequency interference filters between the PBS and the detectors. If you remove them you increase the coincidence detection rate but reduce the visibility for measurements in the +45°/-45° basis.
And there are no suggestions for how you could get rid of them (or move them somewhere else) while preserving high visibility.

So there does not seem to be a clear road toward loophole-free experiments.
And my position is that there won't be such experiments in a year or ten years or ever.
 
  • #974
zonde said:
1. Well, you see, the problem here is that the authors of these papers assume that detector efficiency is the only obstacle to eliminating the detection loophole.
But if you look at actual experiments the picture seems a bit different. There does not seem to be any improvement in the coincidence detection rate for the full setup when you use detectors with high efficiency. The coincidence detection rate is still around 10% for experiments with high coincidence visibility.

2. So there does not seem to be a clear road toward loophole-free experiments.
And my position is that there won't be such experiments in a year or ten years or ever.

1. I don't necessarily doubt the 10% figure; I just can't locate a reference that clearly states this. And I have looked. The number I am trying to find is:

(Alice detections with a matching Bob detection) / (total Alice detections)
(and same for Bob)

To me, that leads to what I think of as visibility. That is probably not the right label, but I have had a difficult time getting a clear picture of how this is calculated and presented. (A small sketch of computing this ratio follows at the end of this post.)


2. Although the so-called "loophole-free" experiments are scientifically desirable, their absence is not even close to meaning much at all. You are welcome to wait for that, for virtually everyone else the existing evidence is overwhelming. Local realism has failed every single test devised to date (when compared to QM). And that is quite a few.
 
  • #975
zonde said:
But if you look at actual experiments the picture seems a bit different. There does not seem to be any improvement in the coincidence detection rate for the full setup when you use detectors with high efficiency. The coincidence detection rate is still around 10% for experiments with high coincidence visibility.
This may be true if you're talking about experiments with pairs of entangled photons, but other types of entanglement experiments have been performed where the detection efficiency was close to 100%, although these experiments are vulnerable to the locality loophole. See here and here for example. If you look at the papers proposing loophole-free experiments that I gave you links to earlier, the proposals are also ones that don't involve photon pairs but rather other types of entangled systems.
 
  • #976
DrChinese said:
Certainly you won't find any mention of dissent on this point in a textbook on the subject.
The textbook that I learned qm from didn't say anything about nature being nonlocal.

DrChinese said:
In light of JesseM's statement to you, he is politely asking you to quit acting as if your minority view is more widely accepted than it is.
My view is that Bell doesn't require me to assume that nature is nonlocal, which JesseM seems to indicate might well be the majority view:

JesseM said:
I don't necessarily think a majority would endorse that positive conclusion (that nature is nonlocal).

DrChinese said:
It confuses readers like JenniT and others.
I don't think that my simplistic, feeble-minded observations, questions or assertions (note the intellectual humility) could possibly confuse anyone -- and certainly not JenniT. Your stuff, on the other hand, is either very deep or very confused. Either way I still have the utmost respect for your and JesseM's, and anyone else's for that matter, attempts to enlighten me wrt Bell-related stuff. If my 'know it all' style is sometimes annoying, then at least that part of my program is successful. Just kidding. Try to block that out and only focus on what I'm saying, or what you think I'm trying to say. The bottom line is that I really don't feel that I fully understand it. Am I alone in this? I don't think so. Anyway, we have these wonderful few threads here at PF actively dealing with Bell's stuff, and for the moment I'm in a sort of philosophy/physics Hillbilly Heaven of considerations of Bell's theorem. Not that the stuff in the thread(s) is (necessarily) all that profound, and not that I would know anyway (more intellectual humility), but that it's motivating me (and I'll bet others too) to research this in ways that I (and they) probably wouldn't take the time to do otherwise (without these threads).

DrChinese said:
You may consider it a "false dichotomy"; but as Maaneli is fond of pointing out, you don't have to take it as a dichotomy at all! You can take it as ONE thing as a whole too: local causality is rejected. That is a complete rejection of your position regardless.
Ok, it's not a dichotomy. Then the nonlocality of nature is the inescapable conclusion following Bell. So why isn't this the general paradigm of physics? Why isn't this taught in physics classes? Why, as JesseM says he thinks and as I would agree, don't a majority of physicists endorse the conclusion that nature is nonlocal? Why bother with any 'mediating' physics at all if Bell has shown this to be impossible?

DrChinese said:
A wise person would have no issue with being a bit more humble.
But I am humble. See above. And wise. See below.

DrChinese said:
You can express yourself without acting like you know it all. I appreciate that after reviewing the case for Bell/Bell tests, you reject the work of thousands of physicists because of your gut feel on the matter. But that is not something to brag about.
I have the gut feeling that you might be exaggerating. Am I wrong? (Would 'hundreds of physicists' be a closer estimate? Or, maybe, 87?)

By the way DrC (and others), I'm going to be out blowing stuff up with various explosives and lighting things on fire with various lenses in commemoration of our independence or whatever. Plus lots of hotdogs with jalapenos, cheese and mustard -- and beer! HAPPY 4TH OF JULY!
 
  • #977
JesseM said:
I define "local realism" to mean that facts about the complete physical state of any region of spacetime can be broken down into a sum of local facts about the state of individual points in spacetime in that region (like the electromagnetic field vector at each point in classical electromagnetism), and that each point can only be causally influenced by other points in its past light cone.

ThomasT said:
Local realism refers to the assumption that there is an objective (though unknown) reality underlying instrumental behavior, and that it's evolving in accordance with the principle of local causality. EPR's elements of reality, as defined wrt the specific experimental situation they were considering, represent a special case and subset of local realism.

JesseM said:
Your definition of "local realism" seems to match the one I gave to my_wan, and Bell's proof is broad enough to cover all possible theories that would be local realist in this sense.
That's the question: is Bell's proof broad enough to cover all possible LR theories?

Certainly I agree with you, and understand why: Bell's theorem, as developed by Bell, disallows any and all LHV or LR theories that conform to Bell's explicit formulation of such theories. That is, such models, conforming to the explicit requirements of Bell, must necessarily be incompatible with qm (and, as has been demonstrated, with experiments). The ONLY question about this, afaik, concerns the generality of Bell's LHV or LR model. In connection with this consideration, LR models of entanglement have been proposed which do reproduce the qm predictions.

JesseM said:
There are no "models of entanglement which are ... both local and realistic, which reproduce the qm predictions", at least not ones which match the other conditions in Bell's proof like each measurement having a unique outcome (no parallel universes) and no "conspiracies" creating correlations between random choice of detector settings and prior values of hidden variables (and again, his equation (2) is not an independent condition, it follows logically from the other conditions). If you disagree, please point to one!
Ok. Here's one, posted in another (Bell) thread by Qubix, which I've taken some time to try to understand. I think it's conceptually equivalent to what I've been saying about the joint experimental context measuring something different than the individual experimental context.

Disproofs of Bell, GHZ, and Hardy Type Theorems and the Illusion of Entanglement
http://uk.arxiv.org/abs/0904.4259

No one has responded to it (in the thread "Bell's mathematical error") except DrC:

DrChinese said:
Christian's work has been rejected. But that is not likely to stop him. He fails test #1 with me: his model is not realistic.
We're waiting for DrC to clarify his 'realism' requirement -- truly a puzzlement in its own right.

Yes, Christian's work (on this) has been 'rejected'. However, the supposed rebuttals have themselves been rebutted. As it stands now, afaik there has been little or no interest in Christian's work on Bell's theorem for about 3 years, much like Bell's first (famous) paper in the 3 years following its publication.

The abstract:
An elementary topological error in Bell's representation of the EPR elements of reality is identified. Once recognized, it leads to a topologically correct local-realistic framework that provides exact, deterministic, and local underpinning of at least the Bell, GHZ-3, GHZ-4, and Hardy states. The correlations exhibited by these states are shown to be exactly the classical correlations among the points of a 3 or 7-sphere, both of which are closed under multiplication, and hence preserve the locality condition of Bell. The alleged non-localities of these states are thus shown to result from misidentified topologies of the EPR elements of reality. When topologies are correctly identified, local-realistic completion of any arbitrary entangled state is always guaranteed in our framework. This vindicates EPR, and entails that quantum entanglement is best understood as an illusion.

And an excerpt from the Introduction:
Hence Bell’s postulate of equation (1) amounts to an implicit assumption of a specific topology for the EPR elements of reality. In what follows, we shall be concerned mainly with the topologies of the spheres S0, S1, S2, S3, and S7, each of which is a set of binary numbers parameterized by Eq. (3), but with very different topologies from one another. Thus, for example, the 1-sphere, S1, is connected and parallelizable, but not simply connected. The spheres S3 and S7, on the other hand, are not only connected and parallelizable, but also simply connected. The crucial point here is that—since the topological properties of different spheres are dramatically different from one another—mistaking the points of one of them for the points of another is a serious error. But that is precisely what Bell has done.

Hopefully, someone is going to actually read Christian's paper and make some knowledgeable comments wrt its contentions -- rather than simply say that it's been rejected. Afaik, Christian's paper is unrefuted and generally unrecognized.
 
  • #978
ThomasT said:
1. The textbook that I learned qm from didn't say anything about nature being nonlocal.

My view is that Bell doesn't require me to assume that nature is nonlocal, which JesseM seems to indicate might well be the majority view:

2. But I am humble. See above. And wise. See below.

3. By the way DrC (and others), I'm going to be out blowing stuff up with various explosives and lighting things on fire with various lenses in commemoration of our independence or whatever. Plus lots of hotdogs with jalapenos, cheese and mustard -- and beer! HAPPY 4TH OF JULY!

1. There is a big difference between this and what I said. You aren't going to find textbooks promoting local realism, and you know it.

Whether nature is nonlocal or not is not what I am asserting. As I have said till I'm blue, nature might be nonrealistic. Or both. So you are being a bit misleading when you comment as you have.

A NOTE FOR EVERYONE: nonlocal could mean a lot of things. The Bohmian crew has one idea. Nonrealistic could mean a lot of things too. MWIers have an idea about this. But nonlocal could mean other things too - like that wave functions can be nonlocal, or that there are particles that travel FTL. So defining nonlocality still has a speculative element to it. I happen to subscribe to the kind of nonlocality that is consistent with the HUP. So if you think the HUP implies some kind of nonlocality, well, there's the definition. And that makes HUP believers into believers of a kind of nonlocality. I call that quantum nonlocality. And I think that is a fairly widespread belief, although I have nothing specific to back that up.

2. Perhaps your humility is one of your best traits. I know it is one of mine!

3. Have fun. And save a beer for me.
 
  • #979
ThomasT said:
1. We're waiting for DrC to clarify his 'realism' requirement -- truly a puzzlement in its own right.

I can see why it is hard to understand.

a) Fill in a set of hidden variables for angle settings 0, 120 and 240 degrees for a group of hypothetical entangled photons.
b) This should be accompanied by a formula that allows me to deduce whether the photons are H> or V> polarized, based on the values of the HVs.
c) The results should reasonably match the predictions of QM, a 25% coincidence rate, regardless of which 2 different settings I might choose to select. I will make my selections randomly, before I look at your HVs but after you have established their values and the formula.

When Christian shows me this, I'll read more. Not before, as I am quite busy: I must wash my hair tonight.
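To spell out what a) to c) amount to, here is a minimal checker sketch (Python; the data layout, function names and the random 'attempt' are my own illustration, not Christian's model or anyone's actual test harness). A 'match' below means both photons of a pair give the same predetermined outcome:

Code:
import itertools, random

def check(dataset):
    # dataset: one dict per photon pair, mapping each angle setting (0, 120, 240 degrees)
    # to the predetermined outcome (+1 for H, -1 for V) the candidate HV model assigns there
    for a, b in itertools.combinations([0, 120, 240], 2):
        matches = sum(1 for pair in dataset if pair[a] == pair[b])
        print(f"settings ({a},{b}): match rate = {matches / len(dataset):.3f}  (target: 0.250)")

# A doomed sample attempt: assign the three outcomes independently at random.
dataset = [{a: random.choice([+1, -1]) for a in (0, 120, 240)} for _ in range(100000)]
check(dataset)   # each pair of settings comes out near 0.500, not 0.250

Any table of three predetermined +/-1 outcomes must contain at least one agreeing pair, so the match rate averaged over the three setting pairs can never drop below 1/3, which is exactly why the 25% target is so hard to hit with predetermined values.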
 
  • #980
DrChinese said:
Whether nature is nonlocal or not is not what I am asserting. As I have said till I'm blue, nature might be nonrealistic. Or both. So you are being a bit misleading when you comment as you have.
If nature is nonrealistic, then it must necessarily be true that quantum correlations are nonlocal. So, as far as I can tell, that's what you're saying, ie., that Bell entails that there is no nature ... nothing ... underlying instrumental phenomena. But how could you, or Bell, or anybody, possibly know that? From a theorem? Maybe you've made a mistake somewhere in your thinking about this!
 
  • #981
DrChinese said:
I can see why it is hard to understand.

a) Fill in a set of hidden variables for angle settings 0, 120 and 240 degrees for a group of hypothetical entangled photons.
b) This should be accompanied by a formula that allows me to deduce whether the photons are H> or V> polarized, based on the values of the HVs.
c) The results should reasonably match the predictions of QM, a 25% coincidence rate, regardless of which 2 different settings I might choose to select. I will make my selections randomly, before I look at your HVs but after you have established their values and the formula.

When Christian shows me this, I'll read more. Not before, as I am quite busy: I must wash my hair tonight.
Your 'realism' requirement remains a mystery. Christian's paper is there for you to critique. I don't think you understand it.
 
  • #982
my_wan said:
... I'm not trying to make the point that Bell was wrong, he was absolutely and unequivocally right, within the context of the definition he used. I'm merely rejecting the over generalization of that definition. Even if no such realistic model exists, by any definition, I still want to investigate all these different principles that might be behind such an effect. The authoritative claim that Bell was right is perfectly valid; to overgeneralize that into a sea of related unknowns, even by authoritative sources, is unwarranted.


my_wan, could we make a parallel to the situation when Albert Einstein, over a hundred years ago, started to work on his theory of relativity? (Note, I'm not saying that you are Einstein! :smile:)

Einstein did not reject the work of Isaac Newton. In most ordinary circumstances, Newton's law of universal gravitation is perfectly valid, and Einstein knew that. The theory of relativity is 'merely' an 'extension' to extreme situations, where we need finer 'instrumentation'.

Besides this fact, Einstein also provided a mechanism for gravity, which thus far had been a paradox without any hope of a 'logical' explanation. As Newton put it in a letter to Bentley in 1692:

"That one body may act upon another at a distance through a vacuum without the mediation of anything else, by and through which their action and force may be conveyed from one another, is to me so great an absurdity that, I believe, no man who has in philosophic matters a competent faculty of thinking could ever fall into it."

If we look at the situation today, there are a lot of ('funny' "action at a distance") similarities. QM & SR/GR work perfectly fine side by side in most 'ordinary' circumstances. It's only in very rare situations that we see clear signs of something that looks like an indisputable contradiction between Quantum Mechanics and Relativity.

John Bell did absolutely not dispute the work of the grandfathers of QM & SR/GR - he was much too intelligent for that. It's only cranky "scientists" like Crackpot Kracklauer who, without hesitating, dismiss QM & SR/GR as a "foundation" for their "next paradigm" in physics.

Now, if we continue the parallel; it's easy to see that IF Einstein would have had the same "mentality" as Crackpot Kracklauer and billschnieder - he would NOT have been successful in formulating the theory of relativity.

A real crackpot would have started his work, at the beginning of the 20th century, by stating:

It's not plausible (I can feel it in my gut!), to imagine bodies affecting each other thru vacuum at a distance! Therefore I shall prove that the mathematical genius Isaac Newton made a terrible mistake when he used a comma instead of a vertical bar, and consequently, Newton's law of universal gravitation is all false. Gravity does not exist. Period.

I shall also prove that there are no experiments confirming Newton's law that have closed all loopholes simultaneously, and there never will be.

I don't know about you, but to me - this is all pathetic. It's clear that billschnieder, with Crackpot Kracklauer as the main source of inspiration, is undoubtedly arguing along these cranky lines above. And to some extent, so does ThomasT, even if he has changed his attitude lately.

So I agree, Bell's Theorem can very well be the signal for the "Next Einstein" to start working on an 'extension' to QM & SR/GR that would make them 100% compatible and, besides this, also provide a mechanism for what we see in current theories and thousands of performed experiments.

This "Next Einstein" must without any doubts include ALL THE WORK OF THE GRANDFATHERS, since in all history of science THIS HAS ALWAYS BEEN THE CASE.

Looking for commas and vertical bars is a hilarious permanent dead-end.
 
  • #983
DevilsAvocado said:
my_wan, could we make a parallel to the situation when Albert Einstein, over a hundred years ago, started to work on his theory of relativity? (Note, I'm not saying that you are Einstein! :smile:)

Einstein did not reject the work of Isaac Newton. In most ordinary circumstances, Newton's law of universal gravitation is perfectly valid, and Einstein knew that. The theory of relativity is 'merely' an 'extension' to extreme situations, where we need finer 'instrumentation'.

I have not provided any well defined mechanisms to equate in such a way. Certainly any such future models can't simply reject the standard model on the grounds of some claim of ontological 'truth'. That is raw crackpottery, even if they are right in some sense. There's a term for that: "not even wrong".

The notion that a particular ontological notion of realism, predicated on equating properties with localized things (localized not meant in a FTL sense here), can be generalized over the entire class called realism simply exceeds what the falsification of that one definition, with its ontological predicates, justifies.

The individual issues I attempted to discuss were considered incomprehensible when viewed from an ontological perspective they weren't predicated on. Well duh, no kidding. I only hoped to get some criticism on the points, irrespective of what it entailed in terms of realism, to help articulate such issues more clearly. But so long as responses are predicated on some singular ontological notion of realism, as if it fully defined "realism", the validity of BI within that ontological context ensures the discussion will go nowhere. I'll continue to investigate such issues myself.

My core point, the overgeneralization of BI local realism to all realism classes, remains valid. Being convinced of the general case by a proof of a limited case is, at a fundamental level, tantamount to proof by lack of evidence. It is therefore invalid, but might not be wrong. I certainly haven't demonstrated otherwise.
 
  • #984
my_wan said:
I have not provided any well defined mechanisms to equate in such a way. Certainly any such future models can't simply reject the standard model on the grounds of some claim of ontological 'truth'. That is raw crackpottery, even if they are right in some sense. There's a term for that: "not even wrong".

I agree, I agree very much. I think your 'agenda' is interesting and healthy. billschnieder on the other hand... well, those words are not allowed here...
 
  • #985
ThomasT said:
JesseM, regarding intellectual humility, don't ever doubt that I'm very thankful that there are educated people like you and DrC willing to get into the details, and explain your current thinking to feeble minded laypersons, such as myself, who are interested in and fascinated by various physics conundrums.

Welcome to the club, ThomasT! I'm glad that you have finally stepped down from the "sophisticated" throne and become an open-minded "wonderer" like many others in this thread, with respect for professionals with much greater knowledge. :wink:

ThomasT said:
You've said that Bell's(2) isn't about entanglement.

No, JesseM didn't say that - I made that laymanish simplification. JesseM wanted more details:

JesseM said:
Basically I'd agree, although I'd make it a little more detailed: (2) isn't about entanglement, it's about the probabilities for different combinations of A and B (like A=spin-up and B=spin down) for different combinations of detector settings a and b (like a=60 degrees, b=120 degrees), under the assumption that there is a perfect correlation between A and B when both sides use the same detector setting, and that this perfect correlation is to be explained in a local realist way by making use of hidden variable λ.

The key is: Bell's (2) is about perfect correlation, explained in a local realist way, using the Hidden variable λ.

ThomasT said:
I understand the proofs of BIs. What I don't understand is why nonlocality or ftl are seriously considered in connection with BI violations and used by some to be synonymous with quantum entanglement.

The evidence supports Bell's conclusion that the form of Bell's (2) is incompatible with qm and experimental results. But that's not evidence, and certainly not proof, that nature is nonlocal or ftl. (I think that most mainstream scientists would agree that the assumption of nonlocality or ftl is currently unwarranted.) I think that a more reasonable hypothesis is that Bell's (2) is an incorrect model of the experimental situation.

...

Why doesn't the incompatibility of Bell's (2) with qm and experimental results imply nonlocality or ftl? Stated simply by DA, and which you (and I) agree with:

ThomasT, I see you and billschnieder spending hundreds of posts trying to disprove Bell's (2) with various farfetched arguments, believing that if Bell's (2) can be proven wrong, then Bell's Theorem and all other work done by Bell will go down the drain, including nonlocality.

I'm only a layman, but I think this is terribly wrong, and I think I can prove it to you in a very simple way.

But first, let's start from the beginning – to be sure that we are indeed talking about the same matters:

After a long debate between Albert Einstein and Niels Bohr, about the uncertain nature of QM, Einstein finally formulated the EPR paradox in 1935 (together with Boris Podolsky and Nathan Rosen).

The aim of the EPR paradox was to show that there was a preexisting reality at the microscopic QM level - that the QM particles indeed had a real value before any measurements were performed (thus disproving the Heisenberg uncertainty principle, HUP).

To make the EPR paper extremely short: if we know the momentum of a particle, then by measuring the position of a twin particle, we would know both momentum & position for a single QM particle - which according to the HUP is impossible information, and thus Einstein had proven QM to be incomplete ("God does not play dice").

Okay? Do you agree?


Einstein & Bohr could never solve this dispute between them as long as they lived (which bothered Bohr throughout his whole life). And as far as I understand, Einstein in his last years, became more at 'unease' with the signs of nonlocality, than the original question of the uncertain nature of QM.

Thirty years after the publication of the EPR paradox, John Bell entered the scene. To my understanding, Bell was hoping that Einstein was right, but, real scientist that he was, he didn't hesitate to publish what he had found, even if this knowledge went against his own 'personal taste'.

In the original paper from 1964, Bell formulates in Bell's (2) the mathematical probabilities representing the vital assumptions made by Einstein in 1949, on the EPR paradox:

"But on one supposition we should, in my opinion, absolutely hold fast: the real factual situation of system S2 is independent of what is done with system S1, which is spatially separated from the former."

In Bell's (3) he sets this equal to the QM expectation value, and then he states, in the third line after Bell's (2):

"BUT IT WILL BE SHOWN THAT THIS IS NOT POSSIBLE"

(my caps+bold)​

Do you understand why we get upset when you and billschnieder argue the way you do? You are urging PF users to read cranky papers - while you & billschnieder obviously haven't read, or understood, the original Bell paper that this is all about??

Do you really think that John Bell was incapable of formulating the probabilities for getting spin up/down from a local preexisting hidden variable? Or the odds of getting a red/blue card out of a box? If we apply Bell's (2) to the "card trick" we would get 0.25, according to billschnieder, instead of 0.5!? The same man who undoubtedly discovered something that both geniuses Albert Einstein and Niels Bohr missed completely? Do you really think that this is a healthy, non-cranky argument to spend hundreds of posts on??!?

Never mind. Forget everything you have (not) "learned". Forget everything and start from scratch. Because now I'm going to show you that there is a problem with locality in EPR, with or without Bell & BI. And we are only going to use your personal favorite – Malus' law.

(Hoping that you didn't have too many hotdogs & beers tonight? :rolleyes:)



Trying to understand nonlocality - only with Malus' law, and without BI!

Malus' law: I = I_0 cos^2(θ_i)

Meaning that the transmitted intensity (I) is given by the initial intensity (I_0) multiplied by the squared cosine of the angle between the light's initial polarization direction and the axis of the polarizer (θ_i).

Translated to QM and one single photon, we get the probability of getting through the polarizer as cos^2(θ_i).

If 6 photons have polarization direction 0º, we will get these results at different polarizer angles:

Code:
[B]Angle	Perc.	Result[/B]
----------------------
0º	100%	111111
22.5º	85%	111110
45º	50%	111000
67.5º	15%	100000
90º	0%	000000

1 denotes the photon got thru and 0 denotes stopped. As you can see, this is 100% compatible with Malus' law and the intensity of polarized light.

In experiments with entangled photons, the parameters are tuned and adjusted to the laser polarization to create the state |Ψ_EPR>, and the coincidence counts N(0º,0º), N(90º,90º) and N(45º,45º) are checked to be accurate.

As you see, not one word about Bell or BI so far, it’s only Malus' law and EPR.

Now, if we run 6 entangled photons for Alice & Bob with both polarizers at 0º, we will get something like this:

Code:
[B]A(0º) 	B(0º)	Correlation[/B]
---------------------------
101010	101010	100%

The individual outcome for Alice & Bob is perfectly random. It's the correlation that matters. If we run the same test once more, we could get something like this:

Code:
[B]A(0º) 	B(0º)	Correlation[/B]
---------------------------
001100	001100	100%

This time we have different individual outcomes, but the same perfect correlation statistics.

The angle of the polarizers does not affect the result, as long as they are the same. If we set both to 90º we could get something like this:

Code:
[B]A(90º) 	B(90º)	Correlation[/B]
---------------------------
110011	110011	100%

Still 100% perfect correlation.

(In fact, the individual outcome for Alice & Bob can be any of the 64 combinations (2^6) at any angle, as long as they are identical, when the two angles are identical.)

As you might have guessed, there is absolutely no problem explaining what is happening here by a local "phenomenon". I can write a computer program in 5 min that will perfectly emulate this physical behavior. All we have to do is give the entangled photon pair the same random preexisting local value, and let them run to the polarizers. No problem.
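For what it's worth, here is roughly what that 5-minute program might look like (a Python sketch, assuming ideal detectors; only the equal-angle case is modelled):

Code:
import random

def emit_pair():
    # the source gives both photons the same random preexisting local value
    shared = random.randint(0, 1)
    return shared, shared

def run(n_pairs):
    matches = 0
    for _ in range(n_pairs):
        a, b = emit_pair()
        # with both polarizers at the same angle, each side just reads off its local value
        matches += (a == b)
    return matches / n_pairs

print(run(6))   # always 1.0, i.e. 100% correlation, with random individual outcomes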

Now, let's make things a little more 'interesting'. Let's assume that Alice's polarizer will stay fixed at angle 0º and Bob's polarizer will have any random value between 0º and 90º. To not make things too complicated at once, we will only check the outcome when Alice gets a photon thru (= 1).

What will the probabilities be for Bob, at all these different angles? Is it at all possible to calculate? Can we make a local prediction?? Well YES!

Code:
[B]Bob	Corr.	Result[/B]
----------------------
0º	100%	111111
22.5º	85%	111110
45º	50%	111000
67.5º	15%	100000
90º	0%	000000

WE RUN MALUS' LAW! And it works!

Obviously at angles 0º and 90º the individual photon outcome must be exactly as above. For any other angle, the individual photon outcome is random, but the total outcome for all 6 photons must match Malus' law.

But ... will this work even when we count Alice = 0 at 0º ... ??

Sure! No problem!

All we have to do is to check locally if Alice is 0 or 1, and mirror the probabilities according to Malus' law. If Alice = 0 we will get this for Bob:

Code:
[B]Bob	Corr.	Result[/B]
----------------------
0º	100%	000000
22.5º	85%	100000
45º	50%	111000
67.5º	15%	111110
90º	0%	111111

Can I still write a computer program that perfectly emulates this physical behavior? Sure! It will maybe take 15 min this time, but all I have to do is to assign Malus' law locally to Bob's photon, with respect to Alice's random value 1/0, and let the photons run to the polarizers. No problem.

We should note that Bob's photons in this scenario will not have a preexisting local value before leaving the common source. All Bob's photons will get is Malus' law, 'adapted' to Alice's preexisting local value 1 or 0.

I don't know if this qualifies as local realism, but it would work mathematically, and could be emulated perfectly in a computer program.
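As a rough illustration, the 15-minute version might look like this (a Python sketch; the function names are mine, Alice is kept fixed at 0º as described above, and no claim is made that this counts as local realism):

Code:
import math, random

def bob_probability(alice_value, bob_angle_deg):
    # Malus' law assigned locally to Bob's photon, mirrored on Alice's 1/0
    p_through = math.cos(math.radians(bob_angle_deg)) ** 2
    return p_through if alice_value == 1 else 1.0 - p_through

def run_pairs(n_pairs, bob_angle_deg):
    results = []
    for _ in range(n_pairs):
        alice = random.randint(0, 1)                  # Alice at 0 deg: random 1/0 outcome
        p = bob_probability(alice, bob_angle_deg)
        bob = 1 if random.random() < p else 0         # Bob's photon carries only this rule
        results.append((alice, bob))
    return results

print(run_pairs(6, 22.5))   # when Alice reads 1, Bob reads 1 about 85% of the time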

And please note: Not one word about Bell or BI so far, only Malus' law and EPR.


BUT NOW IT'S TIME FOR THAT 'LITTLE THING' THAT CHANGES EVERYTHING! :devil:

Are you ready ThomasT? This is the setup:

Alice & Bob are separated by 20 km. The source creating entangled photon pairs is placed in the middle, 10 km from Alice and 10 km from Bob.

The polarizers at Alice & Bob are rotating independently random at very high speed between 0º and 90º.

It takes light 66 microseconds (10^-6 s) to travel 20 km (in vacuum) from Alice to Bob.

The total time for electronic and optical processes in the path of each photon at the detector is calculated to be approximately 100 nanoseconds (10^-9 s).

Now the crucial question is: can we do anything at the local source to 'save' the statistics at polarizers separated by 20 km? Can we use any local hidden variable or formula, or some other unknown 'magic'?? Could we maybe use the 'local' Malus' law even in this scenario to 'fix it'??

I say definitely NO. (What would that be?? A 20 km long Bayesian-probability-chain-rule? :eek:)

WHY!?

BECAUSE WE DO NOT KNOW WHAT ANGLE THE TWO POLARIZERS SEPARATED BY 20 KM WILL HAVE UNTIL THE LAST NANOSECONDS AND IT TAKES 66 MICROSECONDS FOR ALICE & BOB TO EXCHANGE ANY INFORMATION.

ThomasT, I will challenge you on the 'easiest' problem we have here - to get a perfect correlation (100%) when Alice & Bob measure the entangled photon pairs at the same angle. That's all.

Could you write a simple computer program, or explain in words and provide some examples of the outcome for 6 pairs of photons, as I have done above, showing how this could be achieved without nonlocality or FTL?

(Philosophical tirades on "joint probabilities" etc are unwarranted, as they don't mean anything practical.)

If you can do this, and explain it to me, I promise you that I will start a hunger strike outside the door of the Royal Swedish Academy of Sciences, until you get the well deserved Nobel Prize in Physics!

AND REMEMBER – I HAVE NOT MENTIONED ONE WORD ABOUT BELL OR BI !

Good luck!


P.S. Did I say that you are not allowed to get perfect correlation (100%) anywhere else in your example, when the angles differ? And "weird" interpretations don’t count. :biggrin:
 
Last edited:
  • #986
A question about the double slit experiment.

So detectors placed at the slits create the wave function collapse of the photon! Why doesn't the slit apparatus itself create the wave function collapse?
 
  • #987
I'm curious if it's possible to create polarization-entangled beams in which each beam can have some statistically significant non-uniform polarization. The shutter idea I suggested breaks the inseparability condition, collapses the wavefunction so to speak. Yet it still might be worth looking at in some detail.

Anybody know what kind of effects a PBS would have on the polarization of a polarized beam that is passed through it? Would each resulting beam individually retain some preferential polarization?

Rabbitrabbit,
Not really sure what you're asking. It appears a bit off topic in this thread. The interference pattern (the locations and distribution of the individual points of light) doesn't tell you which hole the photons came through. So how can it collapse the wave function of something that can't be known from the photon detections? This thread is probably not the best place for such a discussion.
 
  • #988
Do BI violations require an oversampling of the "full universe", relative to the (max) classical limit, to account for them? This may be an entirely separate argument from the locality or realism issues, but the answer is no. Here's why.

Pick any offset, such as 22.5º, and note the overcount relative to the (max) classical limit, 10.36% in this case. Now for every unique angle at which coincidences exceed the classical limit, there exists a one-to-one correspondence to a unique angle that undercounts the (max) classical limit by that same percentage. In the example given it's 67.5º. Quantitatively equivalent angles, of course, exist in each quadrant of the coordinate system, but a truly unique one-to-one correspondence exists within each quadrant alone, for a given coordinate choice.

This, again, doesn't involve or make any claims about the capacity of a classical model to mimic product state statistics. What it does prove is that the coincidence average over all possible settings, including the BI-violating ones, does not exceed the coincidence average over all settings given a classical 'maximum' per Bell's ansatz. They are equal when averaged over all settings.
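A quick numerical sketch of that claim (Python; I am taking the 'classical maximum' to be the linear bound 1 - θ/90º implied by the 10.36% figure above):

Code:
import math

def qm_coincidence(theta_deg):
    return math.cos(math.radians(theta_deg)) ** 2

def classical_max(theta_deg):
    return 1.0 - theta_deg / 90.0          # linear "triangle" bound

angles = [i * 0.01 for i in range(9001)]   # 0 to 90 degrees
qm_avg = sum(map(qm_coincidence, angles)) / len(angles)
cl_avg = sum(map(classical_max, angles)) / len(angles)

print(round(qm_avg, 3), round(cl_avg, 3))                    # both come out ~0.5
print(round(qm_coincidence(22.5) - classical_max(22.5), 4))  # +0.1036 overcount at 22.5
print(round(classical_max(67.5) - qm_coincidence(67.5), 4))  # +0.1036 matching undercount at 67.5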

The point of this is that I agree that the "unfair sample" argument isn't valid. By this I mean that the notion that you can account for the observed relative variations by assuming that a sufficient portion of the events go undetected is incongruent with experimental constraints. However, other forms of sampling argument, which don't necessarily involve missing detections, can also in general be defined as "unfair sampling" arguments. Thus it may not always be valid to apply the illegitimacy of the missing-detection "unfair sampling" argument to every "fair sampling" argument.

In fact the only way to rule out all possible forms of a sampling argument is to demonstrate that the sum of all coincidences over all possible detector settings exceeds the classical maximum limit. Yet the above argument proves they are exactly equivalent in this one respect.

Any objections?
 
  • #989
DevilsAvocado said:
... ThomasT, I will challenge you on the 'easiest' problem we have here - to get a perfect correlation (100%) when Alice & Bob measures the entangled photon pairs at the same angle. That's all.

Could you write a simple computer program, or explain in words and provide some examples of the outcome for 6 pair of photons, as I have done above, how this could be achieved without nonlocality or FTL?
...
If you can do this, and explain it to me, I promise you that I will start a hunger strike outside the door of the Royal Swedish Academy of Sciences, until you get the well deserved Nobel Prize in Physics!


OMG! I have to give the Nobel to myself! :smile:

Sorry... :redface:

All we have to do is to assign Malus' law to both Alice & Bob (mirrored randomly 1/0), and this will work fine for checking perfect correlation (100%) at the same angle:

Code:
[B]Angle	Bob	Alice	Correlation[/B]
-----------------------------------
0º	111111	111111	100%
22.5º	111110	111110	100%
45º	111000	111000	100%
67.5º	100000	100000	100%
90º	000000	000000	100%


The 'problems' only occur when we have different angles for Alice & Bob (except 0º/90º):

Code:
[B]A 67.5º	B 22.5º	Correlation[/B]
---------------------------
100000	111110	33%

Here the difference is 67.5 - 22.5 = 45º and the correlation should be 50%, and this also depends on the individual outcomes, since for example this gives 0% correlation (instead of the correct 50%):

Code:
[B]A 67.5º	B 22.5º	Correlation[/B]
---------------------------
000001	111110	0%


Well, something more to consider... it’s apparently possible to solve the perfect correlation locally... and maybe that’s what Bell has been telling us all the time! :biggrin:

Sorry again. :blushing:
 
Last edited:
  • #990
my_wan said:
... In fact the only way to rule out all possible forms of a sampling argument is to demonstrate that the sum of all coincidences over all possible detector settings exceeds the classical maximum limit. Yet the above argument proves they are exactly equivalent in this one respect.

Any objections?

The "fair sampling assumption" is also called the "no-enhancement assumption", and I think that is a much better term. Why should we assume that nature has an unknown "enhancement" mechanism that filter out those photons, and only those, who would give us a completely different experimental result!?

Wouldn't that be an even stranger "phenomenon" than nonlocality?:bugeye:?

And the same logic goes for "closing all loopholes at once". Why should nature choose to expose different weaknesses in different experiments, each of which is closed separately??

It doesn’t make sense.
 
  • #991
Here's a particular case where the "fair sampling of the full universe" objection may not be valid, in the thread:
https://www.physicsforums.com/showthread.php?t=369286
DrChinese said:
Strangely, and despite the fact that it "shouldn't" work, the results magically appeared. Keep in mind that this is for the "Unfair Sample" case - i.e. where there is a subset of the full universe. I tried for 100,000 iterations. With this coding, the full universe for both setups - entangled and unentangled - was Product State. That part almost makes sense, in fact I think it is the most reasonable point for a full universe! What doesn't make sense is the fact that you get Perfect Correlations when you have random unknown polarizations, but get Product State (less than perfect) when you have fixed polarization. That seems impossible.

However, by the rules of the simulation, it works.

Now, does this mean it is possible to violate Bell? Definitely not, and they don't claim to. What they claim is that a biased (what I call Unfair) sample can violate Bell even though the full universe does not. This particular point has not been in contention as far as I know, although I don't think anyone else has actually worked out such a model. So I think it is great work just for them to get to this point.

Here "unfair sampling" was equated with a failure to violate BI, while the "full universe" was invoked to differentiate between BI and the and a violation of BI. Yet, as I demonstrated in https://www.physicsforums.com/showthread.php?p=2788956#post2788956", the BI violations of QM, on average of all setting, does not contain a "full universe" BI violation.

Let's look at a more specific objection, to see why the "fair sampling" objection may not be valid:
DrChinese said:
After examining this statement, I believe I can find an explanation of how the computer algorithm manages to produce its results. It helps to know exactly how the bias must work. :smile: The De Raedt et al model uses the time window as a method of varying which events are detected (because that is how their fair sampling algorithm works). That means, the time delay function must be - on the average - such that events at some angle settings are more likely to be included, and events at other angle setting are on average less likely to be included.

Here it was presented as if event-detection failures represented a failure to detect photons. This is absolutely not the case. The detection of photons remained constant throughout; only the time window in which they were detected varied, meaning there were no missing detections, only a variation in whether those detections fell within a coincidence window or not. Thus the perfectly valid objection to using variations in detection efficiency (unfair sampling) does not apply to all versions of unfair sampling. The proof provided in https://www.physicsforums.com/showthread.php?p=2788956#post2788956 tells us QM BI violations are not "full universe" BI violations either.
 
Last edited by a moderator:
  • #992
DevilsAvocado said:
The "fair sampling assumption" is also called the "no-enhancement assumption", and I think that is a much better term. Why should we assume that nature has an unknown "enhancement" mechanism that filter out those photons, and only those, who would give us a completely different experimental result!?

Wouldn't that be an even stranger "phenomenon" than nonlocality?:bugeye:?

And the same logic goes for "closing all loopholes at once". Why should nature choose to expose different weaknesses in different experiments, each of which is closed separately??

It doesn’t make sense.

That depends on what you mean by "enhancement". If by "enhancement" you mean that a summation over all possible or "full universe" choices of measurement settings leads to an excess of detection events, then yes, I would agree. But the point of post #988 was that the BI violations defined by QM, and measured, do not "enhance" detection totals over the classical limit when averaged over the "full universe" of detector settings.

That is, for every detector setting choice which exceeds the classical coincidence limit, there provably exists another choice where coincidences fall below the classical coincidence limit by the exact same amount.

22.5º and 67.5º are one such pair, since cos^2(22.5º) + cos^2(67.5º) = 1. These detection variances are such that there exists an exact one-to-one correspondence between overcount angles and quantitatively identical undercount angles, so that, averaged over all possible settings, the QM and classical coincidence limits exactly match.
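Spelled out with the numbers used earlier (taking the classical limit as the linear 1 - θ/90º):

cos^2(22.5º) = 0.8536 = 0.75 + 0.1036 (overcount relative to the classical limit)
cos^2(67.5º) = 0.1464 = 0.25 - 0.1036 (undercount by the same amount)

and averaging either curve over 0º to 90º gives exactly 1/2, which is why the totals match over the full range of settings.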
 
  • #993
To make the difference between an experimentally invalid "unfair sampling" argument involving detection efficiencies and more general "fair sampling" arguments clearer, consider:

You have a single pair of photons. They are both detected within a time window, thus a coincidence occurs. Now suppose you had chosen different settings and detected both photons, but they didn't fall within the coincidence window. In both cases you had a 100% detection rate, so "fair sampling", defined in terms of detection efficiencies, is absolutely invalid. Yet, assuming the case defined holds, this was a "fair sampling" argument that did not involve detection efficiencies, and cannot be ruled out by perfectly valid arguments against "fair sampling" involving detection efficiencies.
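To make the distinction concrete, here is a toy sketch (Python; the setting-dependent delay function is entirely made up for illustration and is not the De Raedt model or any real detector physics):

Code:
import math, random

def detection_time(setting_deg, hidden_deg):
    # hypothetical local time tag: it depends on the local setting and a shared hidden variable;
    # the photon is always detected, only its arrival time tag varies
    jitter = random.gauss(0.0, 1.0)   # ns
    return jitter + 5.0 * math.sin(math.radians(setting_deg - hidden_deg)) ** 2

def run(n_pairs, a_deg, b_deg, window_ns=2.0):
    detections, coincidences = 0, 0
    for _ in range(n_pairs):
        hidden = random.uniform(0.0, 180.0)     # shared hidden variable from the source
        t_a = detection_time(a_deg, hidden)
        t_b = detection_time(b_deg, hidden)
        detections += 2                         # 100% detection efficiency, by construction
        if abs(t_a - t_b) < window_ns:
            coincidences += 1
    return detections, coincidences

# 20000 detections, yet only a setting-dependent subset count as coincidences
print(run(10000, 0.0, 22.5))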
 
  • #994
There's a comparison I'd like to make between the validity of BI violations applied to realism and the validity of objections to fair sampling arguments.

Just as I claim that the implications of BI are valid but often overgeneralized, the exact same thing has happened here: the demonstrable invalidity of "unfair sampling" arguments involving detection efficiencies is overgeneralized to improperly invalidate all "fair sampling" arguments.

The point here is that you are treading in dangerous territory when you attempt to apply a proof involving a class instance to make claims about an entire class. Doing so technically invalidates the claim, whether you are talking about the "fair sampling" class or the "realism" class. Class instances by definition contain constraints not shared by the entire class, and the set of all instances of a class remains undefined within science.

Of course you can try and object to my refutation of the invalidity of "fair sampling" when such "fair sampling" doesn't involve less than perfect detection efficiencies. :biggrin:
 
  • #995
my_wan said:
There's a comparison I'd like to make between the validity of BI violations applied to realism and the validity of objections to fair sampling arguments.

Just as I claim that the implications of BI are valid but often overgeneralized, the exact same thing has happened here: the demonstrable invalidity of "unfair sampling" arguments involving detection efficiencies is overgeneralized to improperly invalidate all "fair sampling" arguments.

The point here is that you are treading in dangerous territory when you attempt to apply a proof involving a class instance to make claims about an entire class. Doing so technically invalidates the claim, whether you are talking about the "fair sampling" class or the "realism" class. Class instances by definition contain constraints not shared by the entire class, and the set of all instances of a class remains undefined within science.

Of course you can try and object to my refutation of the invalidity of "fair sampling" when such "fair sampling" doesn't involve less than perfect detection efficiencies. :biggrin:


Dear my_wan;

This is very interesting to me. I would love to see some expansion on the points you are advancing, especially about this:

Class instances by definition contain constraints not shared by the entire class, and the set of all instances of a class remains undefined within science.

Many thanks,

JenniT
 
  • #996
You can find examples in set theory.
This is a tricky subject closely connected to the Axiom of Choice (if I understood the idea correctly).

For example, you can write any real number, given you as input. However, as power of continuum is higher than the power of integers, there are infinitely many real numbers, which can't be given and an example. You can even provide a set of real numbers, defined in a tricky way so you can't give any examples of the numbers, belonging to that set, even that set covers [0,1] almost everywhere and it has infinite number of members!

Imagine: set of rational numbers. For example, 1/3
Set of transcendental numbers, for example pi.
Magic set I provide: no example can be given

It becomes even worse when some properties belong exclusively to that 'magic' set. See Banach-Tarski paradox as an example. No example of that weird splitting can be provided (because if one could do it then the theorem could be proven without AC)
 
  • #997
my_wan said:
... Here it was presented as if event-detection failures represented a failure to detect photons. This is absolutely not the case. The detection of photons remained constant throughout; only the time window in which they were detected varied, meaning there were no missing detections, only a variation in whether those detections fell within a coincidence window or not. Thus the perfectly valid objection to using variations in detection efficiency (unfair sampling) does not apply to all versions of unfair sampling. The proof provided in https://www.physicsforums.com/showthread.php?p=2788956#post2788956 tells us QM BI violations are not "full universe" BI violations either.

Have you seen the code?

In the case of the De Raedt Simulation there is no "time window", only a pseudo-random number in r0:

[screenshot of the De Raedt simulation code]


I don’t think this has much to do with real experiments – this is a case of trial & error and "fine-tuning".

One thing that I find 'peculiar' is that the angles of the detectors are not independently random: angle1 is random but angle2 is always at a fixed offset...?:confused:?

To me this does not look like the "real thing"...

Code:
' Initialize the detector settings used for all trials for this particular run - essentially what detector settings are used for "Alice" (angle1) and "Bob" (angle2)
If InitialAngle = -1 Then
  angle1 = Rnd() * Pi ' set as being a random value
  Else
  angle1 = InitialAngle ' if caller specifies a value
  End If
angle2 = angle1 + Radians(Theta) ' fixed value offset always
angle3 = angle1 + Radians(FixedOffsetForChris) ' a hypothetical 3rd setting "Chris" with fixed offset from setting for particle 1, this does not affect the model/function results in any way - it is only used for Event by Event detail trial analysis

...

For i = 1 To Iterations:

  If InitialAngle = -2 Then ' SPECIAL CASE: if the function is called with -2 for InitialAngle then the Alice/Bob/Chris observation settings are randomly re-oriented for each individual trial iteration.
    angle1 = Rnd() * Pi ' set as being a random value
    angle2 = angle1 + Radians(Theta) ' fixed value offset always
    angle3 = angle1 + Radians(FixedOffsetForChris) ' a hypothetical 3rd setting "Chris" with fixed offset from setting for particle 1, this does not affect the model/function results in any way - it is only used for Event by Event detail trial analysis
    End If

...
 
Last edited by a moderator:
  • #998
Dmitry67 said:
You can find examples in the set theory.
This is a tricky subject closely connected to the Axiom of Choice (if I understood the idea correctly).

For example, you can write any real number, given you as input. However, as power of continuum is higher than the power of integers, there are infinitely many real numbers, which can't be given AS an example. You can even provide a set of real numbers, defined in a tricky way so you can't give any examples of the numbers, belonging to that set, even IF that set covers [0,1] almost everywhere and it has infinite number of members!

Imagine: set of rational numbers. For example, 1/3
Set of transcendental numbers, for example pi.
Magic set I provide: no example can be given

It becomes even worse when some properties belong exclusively to that 'magic' set. See Banach-Tarski paradox as an example. No example of that weird splitting can be provided (because if one could do it then the theorem could be proven without AC)

Dear Dmitry67, many thanks for the quick reply. I put 2 small edits in CAPS above.

Hope that's correct?

But I do not understand your "imagine" ++ example.

Elaboration in due course would be nice.

Thank you,

JenniT
 
  • #999
my_wan said:
That depends on what you mean by "enhancement".
It means exactly the same as "fair sampling assumption": That the sample of detected pairs is representative of the pairs emitted.

I.e. we are not assuming that nature is really a tricky bastard, by constantly not showing us the "enhancements" that would spoil all EPR-Bell experiments, all the time. :biggrin:

my_wan said:
the "full universe" of detector settings.
What does this really mean??

my_wan said:
That is, for every detector setting choice which exceeds the classical coincidence limit, there provably exists another choice where coincidences fall below the classical coincidence limit by the exact same amount.

22.5º and 67.5º are one such pair, since cos^2(22.5º) + cos^2(67.5º) = 1. These detection variances are such that there exists an exact one-to-one correspondence between overcount angles and quantitatively identical undercount angles, so that, averaged over all possible settings, the QM and classical coincidence limits exactly match.

my_wan, no offence – but is this the "full universe" of detector settings?:bugeye:?

I don’t get this. What on Earth has cos^2(22.5) + cos^2(67.5) = 1 to do with the "fair sampling assumption"...?

Do you mean that we are constantly missing photons that would, if they were measured, always set correlation probability to 1?? I don’t get it...
 
  • #1,000
my_wan said:
... You have a single pair of photons. They are both detected within a time window, thus a coincidence occurs. Now suppose you chose different settings and detected both photons, but they didn't fall within the coincidence window. Now in both cases you had a 100% detection rate, so "fair sampling", defined in terms of detections efficiencies, is absolutely invalid. Yet, assuming the case defined holds, this was a "fair sampling" argument that did not involve detection efficiencies, and can not be ruled out by perfectly valid arguments against "fair sampling" involving detection efficiencies.

I could be wrong (as last time when promising a Nobel o:)). But to my understanding, the question of "fair sampling" is mainly a question of assuming – even if we only have 1% detection efficiency – that the sample we do get is representative of all the pairs emitted.

To me, this is as natural as when you grab a handful of white sand on a white beach: you don't assume that every grain of sand that didn't end up in your hand... is actually black! :wink:
 
