Is action at a distance possible as envisaged by the EPR Paradox?

The discussion centers on the possibility of action at a distance as proposed by the EPR Paradox, with participants debating the implications of quantum entanglement. It is established that while entanglement has been experimentally demonstrated, it does not allow for faster-than-light communication or signaling. The conversation touches on various interpretations of quantum mechanics, including the Bohmian view and many-worlds interpretation, while emphasizing that Bell's theorem suggests no local hidden variables can account for quantum predictions. Participants express a mix of curiosity and skepticism regarding the implications of these findings, acknowledging the complexities and ongoing debates in the field. Overall, the conversation highlights the intricate relationship between quantum mechanics and the concept of nonlocality.
  • #931
billschnieder said:
1) Non-detection is present in every bell-test experiment ever performed
2) The relevant beam-splitters are real ones used in real experiments, not idealized ones that have never and can never be used in any experiment.

Bell should still have considered non-detection as one of the outcomes in addition to (-1, and +1). If you are right that non-detection is not an issue, the inequalities derived by assuming there are three possible outcomes right from the start, should also be violated. But if you do this and end up with an inequality that is no longer violated, then non-detection IS an issue.

Did you read what I said? I said non-detection DOES matter in experiments. But not in a theoretical proof such as Bell's.
 
  • #932
DrChinese said:
Did you read what I said? I said non-detection DOES matter in experiments. But not in a theoretical proof such as Bell's.

Therefore correlations observed in real experiments, in which non-detection matters, cannot be compared with idealized theoretical proofs in which non-detection was not considered, since those idealized proofs rest on assumptions that no real experiment will ever fulfill.

QM works because it is not an idealized theoretical proof, it actually incorporates and accounts for the experimental setup. It is therefore not surprising that QM and real experiments agree while Bell's inequalities are the only ones left hanging in the cold.
 
  • #933
billschnieder said:
Correct.
In Bell's treatment, only case 4 is considered, the rest are simply ignored or assumed to not be possible.

This is not even wrong. It is outrageous. Case 1 corresponds to two photons leaving the source but none detected, cases 2-3 correspond to two photons leaving the source and only one detected on any of the channels. In Bell test experiments, coincidence-circuitry eliminates 1-3 from consideration because there is no way in the inequalities to include that information. The inequalities are derived assuming that only case 4 is possible.

Oh really? Cases 2 and 3 are in fact observed and recorded. They are usually excluded from counting because of the coincidence time window, this is true. But again, this is just a plain misunderstanding of the process. You cannot actually have the kind of stats you describe, because the probability p(1)=p(2)*p(3)=p(2)^2 and p(4)=1 - p(1). Now this is approximate, and there hypothetically could be a force or something that causes a deviation. But as mentioned, that would require a physically testable hypothesis.
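As a rough numerical sketch of this probability logic (assuming, purely for illustration, that the two photons of a pair are detected independently with the same efficiency):

```python
# Illustrative only: assume each photon of a pair is detected
# independently with efficiency eta.
eta = 0.9
p_both = eta * eta               # case 4: both photons detected
p_one = 2 * eta * (1 - eta)      # cases 2-3: exactly one detected
p_none = (1 - eta) ** 2          # case 1: neither detected
print(round(p_both, 2), round(p_one, 2), round(p_none, 2))  # 0.81 0.18 0.01
```

This matches the intuition above: when single-photon losses (cases 2-3) run at roughly 1 in 10, the unobservable case 1 is down near 1%.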

As far as I can see, detection efficiencies are currently very high. From Zeilinger et al:

These can be characterized individually by measured visibilities, which were: for the source, ≈ 99% (98%) in the H/V (45°/135°) basis; for both Alice’s and Bob’s polarization analyzers, ≈ 99%; for the fibre channel and Alice’s analyzer (measured before each run), ≈ 97%, while the free-space link did not observably reduce Bob’s polarization visibility; for the effect of accidental coincidences resulting from an inherently low signal-to-noise ratio (SNR), ≈ 91% (including both dark counts and multipair emissions, with 55 dB two-photon attenuation and a 1.5 ns coincidence window).

Violation by 16 SD over 144 kilometers.
http://arxiv.org/abs/0811.3129

Or perhaps:

(You just have to read this as it addresses much of these issues directly. Suffice it to say that they address the issue of collection of pairs from PDC very nicely.)

Violation by 213 SD.
http://arxiv.org/abs/quant-ph/0303018
 
  • #934
billschnieder said:
Therefore correlations observed in real experiments, in which non-detection matters, cannot be compared with idealized theoretical proofs in which non-detection was not considered, since those idealized proofs rest on assumptions that no real experiment will ever fulfill.

You know, if there were only one experiment ever performed you might be correct. But this issue has been raised, addressed, and ultimately rejected as an ongoing issue over and over.
 
  • #935
DrChinese said:
Did you read what I said? I said non-detection DOES matter in experiments. But not in a theoretical proof such as Bell's.
Yes, Bell's proof was just showing that the theoretical predictions of QM are incompatible with the theoretical predictions of local realism, not deriving equations that are directly applicable to experiments. Though as I've already said, you can derive inequalities that include detector efficiency as a parameter, and there have been at least a few experiments with sufficiently high detector efficiency that these inequalities are violated (though these experiments were vulnerable to the locality loophole).
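To sketch how detector efficiency can enter such an inequality: one commonly cited symmetric-efficiency result (due to Garg and Mermin) relaxes the local-realist CHSH bound from 2 to 4/η − 2, so a violation is only possible above a threshold efficiency:

```python
import math

# With symmetric detection efficiency eta, one standard result relaxes
# the local-realist CHSH bound from 2 to 4/eta - 2, while QM's maximum
# (the Tsirelson bound) stays at 2*sqrt(2).
qm_max = 2 * math.sqrt(2)

# Violation requires 2*sqrt(2) > 4/eta - 2, i.e. eta above this threshold:
eta_crit = 4 / (qm_max + 2)
print(round(eta_crit, 3))  # 0.828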

A few papers I came across suggested that experiments which closed both the detector efficiency loophole and the locality loophole simultaneously would likely be possible fairly soon. If someone offered to bet Bill a large sum of money that the results of these experiments would continue to match the predictions of QM (and thus continue to violate Bell inequalities that take into account detector efficiency), would Bill bet against them?
 
  • #936
JesseM said:
If someone offered to bet Bill a large sum of money that the results of these experiments would continue to match the predictions of QM (and thus continue to violate Bell inequalities that take into account detector efficiency), would Bill bet against them?
What has this got to do with anything? If there were a convincing experiment which fulfilled all the assumptions in Bell's derivation, I would change my mind. I am after the truth; I don't religiously follow one side just because I have invested my whole life in it. So why would I want to bet at all?

I am merely pointing out here that the so-called proof of non-locality is unjustified, which is not the same as saying there will never be any proof. It seems from your suggestion that you are already absolutely convinced of non-locality, so would you bet a large sum of money against the idea that non-locality will be found to be a serious misunderstanding?
 
  • #937
BTW,
Even if an experimenter ensured 100% detection efficiency, they still have to ensure cyclicity in their data, as illustrated https://www.physicsforums.com/showpost.php?p=2766980&postcount=110

Surprisingly, you both artfully avoid addressing this example which clearly shows a mechanism for violating the inequalities that has nothing to do with detection efficiency.

Bell derives inequalities by assuming that a single particle is measured at multiple angles. Experiments are performed in which many different particles are measured at multiple angles. Apples vs oranges. Comparing the two is equivalent to comparing the average height obtained by measuring a single person's height 1000000 times, with the average height obtained by measuring 1000000 different people each exactly one time.

The point is that certain assumptions are made about the data when deriving the inequalities, that must be valid in the data-taking process. God is not taking the data, so the human experimenters must take those assumptions into account if their data is to be comparable to the inequalities.

Consider a certain disease that strikes persons in different ways depending on circumstances. Assume that we deal with sets of patients born in Africa, Asia and Europe (denoted a,b,c). Assume further that doctors in three cities Lyon, Paris, and Lille (denoted 1,2,3) are assembling information about the disease. The doctors perform their investigations on randomly chosen but identical days (n) for all three cities, where n = 1,2,3,...,N for a total of N days. The patients are denoted Alo(n), where l is the city, o is the birthplace and n is the day. Each patient is then given a diagnosis of A = +1/-1 based on presence or absence of the disease. So if a patient from Europe examined in Lille on the 10th day of the study was negative, A3c(10) = -1.

According to the Bell-type Leggett-Garg inequality

Aa(.)Ab(.) + Aa(.)Ac(.) + Ab(.)Ac(.) >= -1

In the case under consideration, our doctors can combine their results as follows

A1a(n)A2b(n) + A1a(n)A3c(n) + A2b(n)A3c(n)

It can easily be verified that, by combining any possible diagnosis results, the Leggett-Garg inequality will not be violated: the result of the above expression will always be >= -1, so long as the cyclicity (XY+XZ+YZ) is maintained. Therefore the average result will also satisfy that inequality, and we can drop the indices and write the inequality based only on place of origin as follows:

<AaAb> + <AaAc> + <AbAc> >= -1

Now consider a variation of the study in which only two doctors perform the investigation. The doctor in Lille examines only patients of type (a) and (b) and the doctor in Lyon examines only patients of type (b) and (c). Note that patients of type (b) are examined twice as much. The doctors not knowing, or having any reason to suspect that the date or location of examinations has any influence decide to designate their patients only based on place of origin.

After numerous examinations they combine their results and find that

<AaAb> + <AaAc> + <AbAc> = -3

They also find that the single outcomes Aa, Ab, Ac appear randomly distributed around +1/-1, and they are completely baffled. How can the single outcomes be completely random while the products are not? After lengthy discussions they conclude that there must be a superluminal influence between the two cities.

But there are more reasonable explanations. Note that by measuring in only two cities they have removed the cyclicity intended in the original inequality. It can easily be verified that the following scenario will produce what they observed:

- on even days Aa = +1 and Ac = -1 in both cities, while Ab = +1 in Lille and Ab = -1 in Lyon
- on odd days all signs are reversed

In the above case
<A1aA2b> + <A1aA2c> + <A1bA2c> = -3
which is consistent with what they saw. Note that this expression does NOT maintain the cyclicity (XY+XZ+YZ) of the original inequality for the situation in which only two cities are considered and one group of patients is measured more than once. But by dropping the indices for the cities, it gives the false impression that the cyclicity is maintained.

The reason for the discrepancy is that the data is not indexed properly to provide a data structure consistent with the inequalities as derived. Specifically, the inequalities require cyclicity in the data, and since experimenters cannot possibly know all the factors in play in order to index the data so as to preserve the cyclicity, it is unreasonable to expect their data to match the inequalities.
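The two-doctor scenario above can be checked directly; this sketch (the city names are used only as labels) reproduces the -3 result while confirming that genuine triples can never violate the bound:

```python
import itertools

# With genuine triples (cyclicity intact), the Leggett-Garg-type bound
# can never be violated: check all 8 possible diagnosis triples.
for aa, ab, ac in itertools.product([1, -1], repeat=3):
    assert aa * ab + aa * ac + ab * ac >= -1

# Two-doctor scenario: even days give Aa = +1, Ac = -1 in both cities,
# Ab = +1 in Lille and Ab = -1 in Lyon; odd days reverse all signs.
def day_values(n):
    s = 1 if n % 2 == 0 else -1
    return {"lille_a": s, "lille_b": s, "lyon_b": -s, "lyon_c": -s}

N = 1000
sums = [0, 0, 0]
for n in range(N):
    v = day_values(n)
    sums[0] += v["lille_a"] * v["lyon_b"]  # contributes to <AaAb>
    sums[1] += v["lille_a"] * v["lyon_c"]  # contributes to <AaAc>
    sums[2] += v["lille_b"] * v["lyon_c"]  # contributes to <AbAc>

# Each product is -1 on every day, so the three averages sum to -3,
# "violating" the bound even though nothing non-local is going on.
print(sum(s / N for s in sums))  # -3.0
```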

For a fuller treatment of this example, see Hess et al, Possible experience: From Boole to Bell. EPL. 87, No 6, 60007(1-6) (2009)

The key word is "cyclicity" here. Now let's look at various inequalities:

Bell's equation (15):
1 + P(b,c) >= | P(a,b) - P(a,c)|
a, b, c each occur in two of the three terms, each time together with a different partner. However, in actual experiments the (b,c) pair is analyzed at a different time from the (a,b) pair, so the b's are not the same. Just because the experimenter sets a macroscopic angle does not mean that the complete microscopic state of the instrument, which he has no control over, is in the same state each time.
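For a concrete check (purely illustrative), plugging the quantum singlet correlation P(x,y) = -cos(x - y) into Bell's (15) at settings 0°, 60°, 120° shows the violation:

```python
import math

# Illustrative check: the singlet-state correlation P(x,y) = -cos(x - y)
# violates 1 + P(b,c) >= |P(a,b) - P(a,c)| at a=0, b=60, c=120 degrees.
def P(x_deg, y_deg):
    return -math.cos(math.radians(x_deg - y_deg))

a, b, c = 0, 60, 120
lhs = 1 + P(b, c)             # 1 - cos(60) = 0.5
rhs = abs(P(a, b) - P(a, c))  # |-cos(60) + cos(120)| = 1.0
print(lhs >= rhs)             # False: the inequality is violated
```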

CHSH:
|q(d1,y2) - q(a1,y2)| + |q(d1,b2)+q(a1,b2)| <= 2
d1, y2, a1, b2 each occur in two of the four terms. Same argument above applies.

Leggett-Garg:
Aa(.)Ab(.) + Aa(.)Ac(.) + Ab(.)Ac(.) >= -1
 
  • #938
billschnieder said:
What has this got to do with anything? If there were a convincing experiment which fulfilled all the assumptions in Bell's derivation, I would change my mind.
What do you mean by "assumptions", though? Are you just talking about the assumptions about the observable experimental setup, like spacelike separation between measurements and perfect detection of all pairs (or a sufficiently high number of pairs if we are talking about a derivation of an inequality that includes detector efficiency as a parameter)? Or are you including theoretical assumptions like the idea that the universe obeys local realist laws and that there is some set of local variables λ such that P(AB|abλ)=P(A|aλ)*P(B|bλ)? Of course Bell would not expect that any real experiment could fulfill those theoretical assumptions, since he believed the predictions of QM were likely to be correct and his proof was meant to be a proof-by-contradiction showing these theoretical assumptions lead to predictions incompatible with QM under the given observable experimental conditions.
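To make the factorization concrete, here is a minimal toy local model of the form P(AB|abλ)=P(A|aλ)*P(B|bλ); everything in it (the sign-of-cosine outcome rule, the uniform λ) is illustrative, not from Bell's paper:

```python
import math

# Toy local hidden-variable model: lambda is a shared angle, and each
# side's outcome depends only on its own setting and lambda.
def outcome(setting_deg, lam):
    return 1 if math.cos(math.radians(setting_deg) - lam) >= 0 else -1

def correlation(a_deg, b_deg, steps=100_000):
    # average A*B over a uniform grid of lambda (midpoints avoid ties)
    total = 0
    for i in range(steps):
        lam = 2 * math.pi * (i + 0.5) / steps
        total += outcome(a_deg, lam) * outcome(b_deg, lam)
    return total / steps

# Perfect correlation at equal settings, but the fall-off with angle is
# linear rather than the cosine law QM predicts, so Bell bounds hold.
print(correlation(0, 0), correlation(0, 90))  # 1.0 0.0
```

The point of the sketch is just that models of this factorized form exist and reproduce perfect correlations at matching settings, while failing to reproduce the full quantum correlation curve, which is exactly the tension Bell's proof formalizes.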
billschnieder said:
I am merely pointing out here that the so-called proof of non-locality is unjustified
You can only have "proofs" of theoretical claims; for empirical claims you can build up evidence but never prove them with perfect certainty (we can't 'prove' the Earth is round, for example). Bell's proof is not intended to be a proof that non-locality is real in the actual world, just that local realism is incompatible with QM. Of course you apparently doubt some aspects of this purely theoretical proof, like the idea that in any local realist universe it should be possible to find a set of local variables λ such that P(AB|abλ)=P(A|aλ)*P(B|bλ), but you refuse to engage in detailed discussion on such matters. In any case I would say the evidence is strong that QM's predictions about Aspect-type experiments are correct, even if there are a few loopholes like the fact that no experiment has simultaneously closed the detector efficiency and locality loopholes (but again, I think it would be impossible to find a local realist theory that exploited both loopholes in a way consistent with the experiments that have been done so far but didn't look extremely contrived).
billschnieder said:
it seems from your suggestion thatyou are already absolutely convinced of non-locality, so would you bet a large sum of money against the idea that non-locality will be found to be a serious misunderstanding?
Personally I tend to favor the many-worlds interpretation of QM, which could allow us to keep locality by getting rid of the implicit assumption in Bell's proof that every measurement must have a unique outcome (to see how getting rid of this can lead to a local theory with Bell inequality violations, you could check out my post #11 on this thread for a toy model, and post #8 on this thread gives references to various MWI advocates who say it gives a local explanation for BI violations). I would however bet a lot of money that A) future Aspect-type experiments will continue to match the predictions of QM about Bell inequality violations, and B) mainstream physicists aren't going to end up deciding that Bell's theoretical proof is fundamentally flawed and that QM is compatible with a local realist theory that doesn't have any of the kinds of "weird" features that are included as loopholes in rigorous versions of the proof (like many-worlds, or like 'conspiracies' in past conditions that create a correlation between the choice of detector setting and the state of hidden variables at some time earlier than when the choice is made)
 
  • #939
billschnieder said:
BTW,
Even if an experimenter ensured 100% detection efficiency, they still have to ensure cyclicity in their data, as illustrated https://www.physicsforums.com/showpost.php?p=2766980&postcount=110

Surprisingly, you both artfully avoid addressing this example which clearly shows a mechanism for violating the inequalities that has nothing to do with detection efficiency.

Thank you! I was hoping the artistry would show through.

All I can really say is that any local realistic prediction you care to make can pretty well be falsified. On the other hand, any Quantum Mechanical prediction will not be. So at the end of the day, your definitional quibbling is not very convincing. All you need to do is define LR so we can test it. Saying "apples and oranges" when it looks like "apples and apples" (since we start with perfect correlations) is not impressive.

So... make an LR prediction instead of hiding.
 
  • #940
24 000 hits and still going... Einstein is probably turning in his grave at the way the EPR argument is still going and going...
 
  • #941
billschnieder said:
BTW,
Even if an experimenter ensured 100% detection efficiency, they still have to ensure cyclicity in their data, as illustrated https://www.physicsforums.com/showpost.php?p=2766980&postcount=110

Surprisingly, you both artfully avoid addressing this example which clearly shows a mechanism for violating the inequalities that has nothing to do with detection efficiency.
I did respond to that post, but I didn't end up responding to your later post #128 on the subject, because before I got to it you said you didn't want to talk to me any more unless I agreed to make my posts as short as you wanted them to be and to leave out discussions of things I thought were relevant whenever you didn't agree they were relevant. But since you bring it up, I think you're just incorrect in saying in post #128 that the Leggett-Garg inequality is not intrinsically based on a large collection of trials where on each trial we measure the same system at 2 of 3 possible times (as opposed to measuring two parts of an entangled system with one of several possible combinations of detector settings, as with other inequalities)--see this paper and http://www.nature.com/nphys/journal/v6/n6/full/nphys1641.html, which both describe it using terms like "temporal inequality" and "inequality in time", for example. I also found the paper where you got the example with patients from different countries here; they explain in the text around equation (8) what the example (which doesn't match the conditions assumed in the Leggett-Garg inequality) has to do with the real Leggett-Garg inequality:
Realism plays a role in the arguments of Bell and followers because they introduce a variable λ representing an element of reality and then write

Γ = <A_a(λ)A_b(λ)> + <A_a(λ)A_c(λ)> + <A_b(λ)A_c(λ)> >= -1    (8)

Because no λ exists that would lead to a violation except a λ that depends on the index pairs (a, b), (a, c) and (b, c) the simplistic conclusion is that either elements of reality do not exist or they are non-local. The mistake here is that Bell and followers insist from the start that the same element of reality occurs for the three different experiments with three different setting pairs. This assumption implies the existence of the combinatorial-topological cyclicity that in turn implies the validity of a non-trivial inequality but has no physical basis. Why should the elements of reality not all be different? Why should they, for example not include the time of measurement?
If you look at that first paper, they mention on p. 2 that in deriving the inequality each particle is assumed to be in one of two possible states at all times, so each particle has a well-defined classical "history" of the type shown in the diagram at the top of p. 4, and there is assumed to be some well-defined probability distribution on the ensemble of all possible classical histories. They also mention at the bottom of p. 3 that deriving the inequality requires that we assume it is possible to make "noninvasive measurements", so the choice of which of 3 times to make our first measurement does not influence the probability of different possible classical histories. They mention that this assumption can also be considered a type of "locality in time". This assumption is a lot more questionable than the usual type of locality assumed when there is a spacelike separation between measurements, since nothing in local realism guarantees that you can make "noninvasive" measurements on a quantum system which don't influence its future evolution after the measurement. And this also seems to be the assumption the authors are criticizing in the quote above when they say 'Why should the elements of reality not all be different? Why should they, for example not include the time of measurement?' (I suppose the λ that appears in the equation in the quote represents a particular classical history, so the inequality would hold as long as the probability distribution P(λ) on different possible classical histories is independent of what pair of times the measurements are taken on a given trial.)
So this critique appears to be rather specific to the Leggett-Garg inequality, maybe you could come up with a variation for other inequalities but it isn't obvious to me (I think the 'noninvasive measurements' condition would be most closely analogous to the 'no-conspiracy' condition in usual inequalities, but the 'no-conspiracy' condition is a lot easier to justify in terms of local realism when λ can refer to the state of local variables at some time before the experimenters choose what detector settings to use)
 
  • #942
JesseM said:
...mainstream physicists aren't going to end up deciding that Bell's theoretical proof is fundamentally flawed and that QM is compatible with a local realist theory that doesn't have any of the kinds of "weird" features that are included as loopholes in rigorous versions of the proof (like many-worlds, or like 'conspiracies' in past conditions that create a correlation between the choice of detector setting and the state of hidden variables at some time earlier than when the choice is made)

True, not likely to change much anytime soon.

The conspiracy idea (and it goes by a lot of names, including No Free Will and Superdeterminism) is not really a theory. More like an idea for a theory. You would need to provide some kind of mechanism, and that would require a deep theoretical framework in order to account for Bell inequality violations. And again, much of that would be falsifiable. No one actually has either of those on the table: a theory, I mean, and a mechanism.

If you don't like abandoning c, why don't you look at Relational Blockworld? No non-locality, plus you get the added bonus of a degree of time symmetry - no extra worlds to boot! :smile:
 
  • #943
DrChinese said:
... However we won't actually know when case 1 occurs, correct? But unless the chance of 1 is substantially greater than either 2 or 3 individually (and probability logic indicates it should be definitely less - can you see why?), then we can estimate it. If case 4 occurs 50% of the time or more, then 1 should occur less than 10% of the time. This is in essence a vanishing number, since visibility is approaching 90%. That means cases 2 and 3 are happening only about 1 in 10, which would imply case 1 of about 1%.

OMG. I can only hope that < 10% of my brain was 'connected' when asking about this first time... :redface:

OF COURSE we can’t know when case 1 occurs! There are no little "green EPR men" at the source shouting – Hey guys! Here comes entangled pair no. 2345! ARE YOU READY!

Sorry.

DrChinese said:
So you have got to claim all of the "missing" photons are carrying the values that would prove a different result. And this number is not much. I guess it is *possible* if there is a physical mechanism which is responsible for the non-detections, but that would also make it experimentally falsifiable. But you should be aware of how far-fetched this really is. In other words, in actual experiments cases 2 and 3 don't occur very often. Which places severe constraints on case 1.

Far-fetched?? To me this looks like something that Crackpot Kracklauer would use as the final disproof of all mainstream science. :smile:

Seriously, an unknown "physical mechanism" working against the reliability of EPR-Bell experiments!:bugeye:? If someone is using this cantankerous argument as a proof against Bell's Theorem, he’s apparently not considering the consequences...

That "physical mechanism" would need some "artificial intelligence" to pull that thru, wouldn’t it?? Some kind of "global memory" working against the fair sampling assumption – Let’s see now, how many photon pairs are detected and how many do we need to mess up, to destroy this silly little experiment?

Unless this "physical mechanism" also can control the behavior of humans (humans can mess up completely on their own as far as we know), it would need some FTL mechanism to verify that what should be measured is really measured = back to square one!

By the way, what’s the name of this interesting "AI paradigm"...?? :biggrin:


P.S. I checked this pretty little statement "QM works because it is not an idealized theoretical proof" against the crackpot index (http://math.ucr.edu/home/baez/crackpot.html) and it scores 10 points!
"10 points for each claim that quantum mechanics is fundamentally misguided (without good evidence)."
Not bad!
 
  • #944
JesseM said:
But since you bring it up, I think you're just incorrect in saying in post #128 that the Leggett-Garg inequality is not intrinsically based on a large collection of trials where on each trial we measure the same system at 2 of 3 possible times (as opposed to measuring two parts of an entangled system with one of several possible combinations of detector settings, as with other inequalities)

As I mentioned to you earlier, it is your opinion here that is wrong. Of course the LGI applies to the situation you mention, but inequalities of that form were originally proposed by Boole in 1862 (see http://rstl.royalsocietypublishing.org/content/152/225.full.pdf+html) and had nothing to do with time. All that is necessary for them to apply is n-tuples of two-valued (+/-) variables. In Boole's case it was three Boolean variables. The inequalities result simply from arithmetic, and nothing else.
We perform an experiment in which each data point consists of a triple such as (i,j,k). Let us call this set S123. We then decide to analyse this data by extracting three data sets of pairs: S12, S13, S23. What Boole showed is essentially that if i, j, k are two-valued variables then, no matter the type of experiment generating S123, the datasets of pairs extracted from S123 will satisfy the inequalities:

|<S12> +/- <S13>| <= 1 +/- <S23>

You can verify that this is Bell's inequality (replace 1,2,3 with a,b,c). Using the same ideas he came up with many different inequalities, one of which is the LGI, all from arithmetic. So a violation of these inequalities by data points to a mathematically incorrect treatment of the data.

You may be wondering how this applies to EPR. The EPR case involves performing an experiment in which each data point is a pair of two-valued outcomes (i,j); let us call this set R12. Bell and followers then assume that they can substitute Rij for Sij in the inequalities, forgetting that the inequality holds for pairs extracted from triples, but not necessarily for independently collected pairs of two-valued data.

Note that each term in Bell's inequality is a pair from a set of triples (a, b, c), but the data obtained from experiments is a pair from a set of pairs.
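The arithmetic claim here is easy to verify by brute force; the sketch below checks Boole's bound for pairs drawn from common triples, and shows that three independently generated pair averages need not respect it:

```python
import itertools

# Boole's constraint holds pointwise for pairs extracted from one
# common triple (i, j, k) of +/-1 values: |ij +/- ik| <= 1 +/- jk.
for i, j, k in itertools.product([1, -1], repeat=3):
    assert abs(i * j + i * k) <= 1 + j * k
    assert abs(i * j - i * k) <= 1 - j * k

# Three *independently generated* pair averages need not obey it:
# suppose separate runs give <R12> = <R13> = <R23> = -1. Then
lhs = abs(-1 + -1)  # |<R12> + <R13>| = 2
rhs = 1 + (-1)      # 1 + <R23> = 0
print(lhs <= rhs)   # False: the pair data violate Boole's bound
```

(Pairwise perfect anticorrelation in three separate runs of pair data is perfectly possible, whereas no assignment of a common triple can achieve it, which is the whole point of the pointwise check.)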

I also found the paper where you got the example with patients from different countries here,
That is why I gave you the reference before, have you read it, all of it?

So this critique appears to be rather specific to the Leggett-Garg inequality, maybe you could come up with a variation for other inequalities but it isn't obvious to me (I think the 'noninvasive measurements' condition would be most closely analogous to the 'no-conspiracy' condition in usual inequalities, but the 'no-conspiracy' condition is a lot easier to justify in terms of local realism when λ can refer to the state of local variables at some time before the experimenters choose what detector settings to use)
This is not a valid criticism for the following reason:

1) You do not deny that the LGI is a Bell-type inequality. Why do you think it is called that?
2) You have not convincingly argued why the LGI should not apply to the situation described in the example I presented
3) You do not deny the fact that in the example I presented, the inequalities can be violated simply based on how the data is indexed.
4) You do not deny the fact that in the example, there is no way to ensure the data is correctly indexed unless all relevant parameters are known by the experimenters
5) You do not deny that Bell's inequalities involve pairs from a set of triples (a,b,c), and yet experiments involve pairs from a set of pairs.
6) You do not deny that it is impossible to measure triples in any EPR-type experiment; therefore Bell-type inequalities do not apply to those experiments. Boole had shown 100+ years ago that you cannot substitute Rij for Sij in these types of inequalities.
 
  • #945
I can see why Dirac disdained this kind of pondering, which in the end has little or nothing to do with the work of physics and its applications in life.
 
  • #946
nismaratwork said:
I can see why Dirac disdained this kind of pondering, which in the end has little or nothing to do with the work of physics and its applications in life.
There's a real point here. If the motivation in defining a realistic mechanism is simply to soothe a preexisting philosophical disposition, then such debates have nothing to do with anything. However, some big game remains in physics, perhaps even the biggest available. If further constraints can be established, or constraints that have been overly generalized are better defined, it might turn out to be of value.

As DevilsAvocado put it, little "green EPR men" are not a very satisfactory theoretical construct. Realists want to avoid them with realistic constructs, with varying judgment on what constitutes realistic. Non-realists avoid them by denying the realism of the premise. In the end, the final product needs only a formal description with the greatest possible predictive value, independent of our philosophical sensibilities.
 
  • #947
JesseM said:
I'm glad you're still asking questions, but if you don't really understand the proof, and you do know it's been accepted as valid for years by mainstream physicists, doesn't it make sense to be a little more cautious about making negative claims about it like this one from an earlier post?

ThomasT said:
I couldn't care less if nonlocality or ftl exist or not. In fact, it would be very exciting if they did. But the evidence just doesn't support that conclusion.
I understand the proofs of BIs. What I don't understand is why nonlocality or ftl are seriously considered in connection with BI violations and used by some to be synonymous with quantum entanglement.

The evidence supports Bell's conclusion that the form of Bell's (2) is incompatible with qm and experimental results. But that's not evidence, and certainly not proof, that nature is nonlocal or ftl. (I think that most mainstream scientists would agree that the assumption of nonlocality or ftl is currently unwarranted.) I think that a more reasonable hypothesis is that Bell's (2) is an incorrect model of the experimental situation.

Which you seem to agree with:
JesseM said:
It (the form of Bell's 2) shows how the joint probability can be separated into the product of two independent probabilities if you condition on the hidden variables λ. So, P(AB|abλ)=P(A|aλ)*P(B|bλ) can be understood as an expression of the locality condition. But he obviously ends up proving that this doesn't work as a way of modeling entanglement...it's really only modeling a case where A and B are perfectly correlated (or perfectly anticorrelated, depending on the experiment) whenever a and b are the same, under the assumption that there is a local explanation for this perfect correlation (like the particles being assigned the same hidden variables by the source that created them).

Why doesn't the incompatibility of Bell's (2) with qm and experimental results imply nonlocality or ftl? Stated simply by DA, and which you (and I) agree with:
DevilsAvocado said:
Bell's (2) is not about entanglement, Bell's (2) is only about the hidden variable λ.
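As an aside, the factorization P(AB|abλ)=P(A|aλ)P(B|bλ) quoted above can be checked in a small sketch (the response probabilities below are invented purely for illustration): conditioned on λ the joint probability factorizes by construction, yet the λ-averaged outcomes remain correlated, with the common cause λ doing all the work.

```python
# Toy local hidden-variable sketch (invented numbers, for illustration only).
# The source emits a shared variable lam in {0, 1} with equal probability;
# each side's outcome probability depends only on its OWN setting and lam.
def p_outcome(outcome, setting, lam):
    p_plus = [[0.9, 0.2], [0.3, 0.8]][setting][lam]
    return p_plus if outcome == +1 else 1.0 - p_plus

def joint(a_out, b_out, a_set, b_set):
    # P(AB|ab) = sum over lam of P(lam) * P(A|a,lam) * P(B|b,lam)
    return sum(0.5 * p_outcome(a_out, a_set, lam) * p_outcome(b_out, b_set, lam)
               for lam in (0, 1))

def marginal(out, setting):
    return sum(0.5 * p_outcome(out, setting, lam) for lam in (0, 1))

# Conditioned on lam the joint factorizes by construction; averaged over lam
# the outcomes are nonetheless correlated -- the common cause lam does the work.
print(joint(+1, +1, 0, 0))                # ≈ 0.425
print(marginal(+1, 0) * marginal(+1, 0))  # ≈ 0.3025, so outcomes are correlated
```

Nothing in this sketch conflicts with QM, of course; it only illustrates what the locality condition says before Bell shows it cannot reproduce the quantum correlations.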
 
  • #948
ThomasT said:
I understand the proofs of BIs. What I don't understand is why nonlocality or ftl are seriously considered in connection with BI violations and treated by some as synonymous with quantum entanglement.
Yes, you don't understand it, but mainstream physicists are in agreement that Bell's equations all follow directly from local realism plus a few minimal assumptions (like no parallel universes, no 'conspiracies' in past conditions that predetermine what choice the experimenter will make on each trial and tailor the earlier hidden variables to those future choices), so why not consider the possibility that the problem lies with your understanding rather than with that of all those physicists over the decades?
ThomasT said:
The evidence supports Bell's conclusion that the form of Bell's (2) is incompatible with qm and experimental results.
And (2) would necessarily be true in all local realist theories that satisfy those few minimal assumptions. (2) is not in itself a separate assumption, it follows logically from the postulate of local realism.
ThomasT said:
But that's not evidence, and certainly not proof, that nature is nonlocal or ftl. (I think that most mainstream scientists would agree that the assumption of nonlocality or ftl is currently unwarranted.)
They would agree that it's warranted to rule out local realist theories. Do you disagree with that? Of course this doesn't force you to believe in ftl, you are free to just drop the idea of an objective universe that has a well-defined state even when we're not measuring it (which is basically the option taken by those who prefer the Copenhagen interpretation), or consider the possibility that each measurement splits the experimenter into multiple copies who see different results (many-worlds interpretation), or consider the possibility of some type of backwards causality that can create the kind of "conspiracies" I mentioned.
ThomasT said:
I think that a more reasonable hypothesis is that Bell's (2) is an incorrect model of the experimental situation.
Local realism is an incorrect model, but (2) is not a separate assumption from local realism, it would be true in any local realist theory.
ThomasT said:
Why doesn't the incompatibility of Bell's (2) with qm and experimental results imply nonlocality or ftl?
It implies the falsity of local realism, which means if you are a realist who believes in an objective universe independent of our measurements, and you don't believe in any of the "weird" options like parallel worlds or "conspiracies", your only remaining option is nonlocality/ftl.
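For anyone wanting to see the logic above in miniature, here is a sketch (the angles are the standard textbook CHSH choices, not taken from any post in this thread): every deterministic local strategy satisfies |S| ≤ 2, mixtures over λ cannot exceed the deterministic maximum, and the singlet-state prediction E(α,β) = −cos(α−β) reaches 2√2.

```python
import itertools
import math

# Each deterministic local strategy fixes outcomes A0, A1 (Alice at settings
# a, a') and B0, B1 (Bob at b, b') in {+1, -1}; a hidden variable lam merely
# mixes strategies, so no mixture can exceed the deterministic maximum of |S|.
best = 0
for A0, A1, B0, B1 in itertools.product((+1, -1), repeat=4):
    S = A0*B0 - A0*B1 + A1*B0 + A1*B1   # CHSH combination
    best = max(best, abs(S))
print(best)  # 2 -- the local-realist bound

# Singlet-state QM prediction E(alpha, beta) = -cos(alpha - beta),
# evaluated at the standard CHSH angles:
a, ap, b, bp = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4
E = lambda x, y: -math.cos(x - y)
S_qm = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)
print(abs(S_qm))  # ≈ 2.828, i.e. 2*sqrt(2) > 2
```

The enumeration is the whole content of the "no separate assumption" point: (2) is just the statement that the observed statistics are some mixture of these sixteen strategies.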
 
  • #949
DrChinese said:
As far as I can see, there are currently very high detection efficiencies. From Zeilinger et al:

These can be characterized individually by measured visibilities, which were: for the source, ≈ 99% (98%) in the H/V (45°/135°) basis; for both Alice’s and Bob’s polarization analyzers, ≈ 99%; for the fibre channel and Alice’s analyzer (measured before each run), ≈ 97%, while the free-space link did not observably reduce Bob’s polarization visibility; for the effect of accidental coincidences resulting from an inherently low signal-to-noise ratio (SNR), ≈ 91% (including both dark counts and multipair emissions, with 55 dB two-photon attenuation and a 1.5 ns coincidence window).

Violation by 16 SD over 144 kilometers.
http://arxiv.org/abs/0811.3129
What does visibility have in common with detection efficiency? :bugeye:
Visibility = (coincidence-max - coincidence-min)/(coincidence-max + coincidence-min)
Efficiency = coincidence rate/singles rate
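The distinction can be made concrete with hypothetical count rates (the numbers below are invented for illustration, not from any experiment): visibility can be near unity even while the coincidence-to-singles ratio, the quantity relevant to the detection loophole, stays low.

```python
# Hypothetical count rates, for illustration only (not from any experiment).
coinc_max = 9800     # coincidences/s at the best-aligned analyzer angle
coinc_min = 100      # coincidences/s at the worst-aligned angle
coincidences = 9800  # coincidence rate at fixed settings
singles = 20000      # singles rate at one detector

# Visibility is a fringe contrast; efficiency is the paired fraction.
visibility = (coinc_max - coinc_min) / (coinc_max + coinc_min)
efficiency = coincidences / singles

print(round(visibility, 3))  # 0.98 -- high even though many pairs are lost
print(round(efficiency, 3))  # 0.49 -- the number the detection loophole cares about
```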
 
  • #950
JesseM said:
A few papers I came across suggested that experiments which closed both the detector efficiency loophole and the locality loophole simultaneously would likely be possible fairly soon. If someone offered to bet Bill a large sum of money that the results of these experiments would continue to match the predictions of QM (and thus continue to violate Bell inequalities that take into account detector efficiency), would Bill bet against them?
Interesting. And do those papers suggest at least approximately what kind of experiments they will be?
Or is it just a very general idea?

Besides, if you want to discuss betting with money, you are in the wrong place.
 
  • #951
nismaratwork said:
I can see why Dirac disdained this kind of pondering, which in the end has little or nothing to do with the work of physics and its applications in life.

So true. It would be nice if, while debating the placement of punctuation and the definition of the words in the language we speak daily, we reminded ourselves of the importance of predictions and related experiments. Because every day there are fascinating new experiments involving new forms of entanglement. That would be the same "action at a distance" as envisioned in this thread, which some think they have "disproven".

And just to prove that, check out the following link:

As of this morning, this represented 572 articles on the subject - many theoretical but also many experimental - on entanglement and Bell.

Oh, and that would be so far in 2010. Please folks, get a grip. You don't need to take my word for it. Read about 50 or 100 of these papers, and you will see that these issues are being tackled every day by physicists who wake up thinking about this. And you will also see mixed in many interesting alternative ideas which are out of the mainstream: these articles are not all peer reviewed. Look for a Journal Reference to get those, which tend to be mainstream and higher quality overall. Many experimental results will be peer reviewed.
 
  • #952
zonde said:
What does visibility have in common with detection efficiency? :bugeye:
Visibility = (coincidence-max - coincidence-min)/(coincidence-max + coincidence-min)
Efficiency = coincidence rate/singles rate

They are often used differently in different contexts. The key is to ask: what pairs am I attempting to collect? Did I collect all of those pairs? Once I collect them, was I able to deliver them to the beam splitter? Of those photons going through the beam splitter, what % were detected? By analyzing carefully, the experimenter can often evaluate these questions. In state-of-the-art Bell tests, these can be important - but not always. Each test is a little different. For example, if fair sampling is assumed, then strict evaluation of visibility may not be important. But if you are testing the fair sampling assumption as part of the experiment, it would be an important factor.

Clearly, the % of cases where there is a blip at Alice's station but not Bob's (and vice versa) is a critical piece of information where fair sampling is concerned. Subtract that from 100% and you get the fraction of mutually detected pairs. I believe this is what Zeilinger refers to as visibility, but honestly it is not always clear to me from the literature. Sometimes this may be called detection efficiency. At any rate, there are several distinct issues involved.

Keep in mind that for PDC pairs, the geometric angle of the collection equipment is critical. Ideally, you want to get as many entangled pairs as possible and as few unentangled as possible. If alignment is not correct, you will miss entangled pairs. You may even mix in some unentangled pairs (which will reduce your results from the theoretical max violation of a BI). There is something of a border at which collecting more entangled pairs is offset by admitting too many more unentangled ones. So it is a balancing act.
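The balancing act described above can be put in rough numbers. Under a simplifying assumption (mine, not from the post): if the unentangled admixture contributes zero correlation, then a fraction V of entangled pairs scales every CHSH correlation by V, so the violation survives only while V > 1/√2 ≈ 0.707.

```python
import math

# Simplifying assumption: unentangled pairs are uncorrelated, so a fraction V
# of entangled pairs scales the ideal singlet CHSH value by V.
S_MAX_QM = 2 * math.sqrt(2)   # ideal singlet CHSH value

def S_observed(V):
    """CHSH value seen when only fraction V of collected pairs are entangled."""
    return V * S_MAX_QM

V_threshold = 2 / S_MAX_QM    # violation requires V > 1/sqrt(2)
print(round(V_threshold, 4))  # 0.7071
print(S_observed(0.9) > 2)    # True  -- still violates the inequality
print(S_observed(0.6) > 2)    # False -- admixture washes out the violation
```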
 
  • #953
ThomasT said:
I understand the proofs of BIs. What I don't understand is why nonlocality or ftl are seriously considered in connection with BI violations and treated by some as synonymous with quantum entanglement.
In defining the argument, the assumptions and consequences had to be enumerated, regardless of how unlikely one or another potential consequence might be from some point of view. IF something physical actually traverses that space, in the allotted time, to effect the outcomes as measured, realism is saved. It doesn't matter how reasonable or silly that might be: given the "IF", the fact follows and thus must be included in the range of potentials.

JesseM said:
Yes, you don't understand it, but mainstream physicists are in agreement that Bell's equations all follow directly from local realism plus a few minimal assumptions (like no parallel universes, no 'conspiracies' in past conditions that predetermine what choice the experimenter will make on each trial and tailor the earlier hidden variables to those future choices), so why not consider the possibility that the problem lies with your understanding rather than with that of all those physicists over the decades?
This is the worst possible argument. It is almost precisely the same argument that a friend of mine, who turned religious, used to try to convert me. It's invalid in any context, no matter how solid the claim it's used to support. I cringe no matter how trivially true the claim it is used to support may be. So if the majority "don't understand", as you have stated for yourself, acceptance of this argument makes the majority acceptance a self-fulfilling prophecy.

JesseM said:
And (2) would necessarily be true in all local realist theories that satisfy those few minimal assumptions. (2) is not in itself a separate assumption, it follows logically from the postulate of local realism.
You call it a postulate of local realism, but fail to mention that this "postulate of local realism" is predicated on a very narrowly defined 'operational' definition, one which even its originators (EPR) disavowed, at the time it was proposed, as the only or sufficiently complete definition. It was a definition that I personally rejected at a very young age, before I ever heard of EPR or knew what QM was, solely on classical grounds, but related to some ideas DrC used to reject Humean realism. Now silly authority arguments like the one provided above are used to demand that I drop "realism", because somebody generalized an 'operational' definition that I rejected in my youth and proved it false. Am I supposed to be in awe of that?

JesseM said:
It implies the falsity of local realism, which means if you are a realist who believes in an objective universe independent of our measurements, and you don't believe in any of the "weird" options like parallel worlds or "conspiracies", your only remaining option is nonlocality/ftl.
Unequivocally false. There are other options, unless you want to insist that one 'operational' definition is, by academic fiat, the only definition of realism available. Even then it doesn't make you right; you have only chosen a definition to ensure you can't be wrong.

There are whole ranges of issues involved, many of which have some philosophical content that doesn't strictly belong in science, unless of course you can formalize it into something useful. Yet the "realism" claim associated with Bell is a philosophical claim, made by taking a formalism geared toward a single 'operational' definition and expanding it over the entire philosophical domain of realism. It's a massive composition fallacy.

The composition fallacy runs even deeper. There's the assumption that the things we measure are existential in the sense of things. Even if every possible measurable we are capable of is provably no more real than a coordinate choice, that is NOT proof that things don't exist independent of being measured (the core assumption of realism), or that a theoretical construct can't build an empirically consistent emergent system based on existential things. Empirical completeness and the completeness of nature are not synonymous. Fundamentally, realism is predicated on measurement independence, and cannot be proved false on the grounds that an act of measurement has effects. If it didn't have effects, measurements would be magical. Likewise, in a strict realist sense, an existential thing (an independent variable) whose measurable properties are themselves independent is also a claim of magic.

So please, at least qualify local realism with "Einstein realism", "Bell realism", or some other suitable qualifier, so as not to make the absurd excursion into a blanket philosophical claim that the entire range of all forms of "realism" is provably falsified. It turns science into a philosophical joke, whether right or wrong. If this argument is overly philosophical, sorry, but that is what the blanket claim that BI violations falsify local realism imposes.
 
  • #954
my_wan said:
It turns science into a philosophical joke, whether right or wrong. If this argument is overly philosophical, sorry, but that is what the blanket claim that BI violations falsify local realism imposes.

Does it help if we say that BI violations blanket falsify claims of EPR (or Bell) locality and EPR (or Bell) realism? Because if words are to have meaning at all, this is the case.
 
  • #955
DrChinese said:
So true. It would be nice if, while debating the placement of punctuation and the definition of the words in the language we speak daily, we reminded ourselves of the importance of predictions and related experiments. Because every day there are fascinating new experiments involving new forms of entanglement. That would be the same "action at a distance" as envisioned in this thread, which some think they have "disproven".

And just to prove that, check out the following link:

As of this morning, this represented 572 articles on the subject - many theoretical but also many experimental - on entanglement and Bell.

Oh, and that would be so far in 2010. Please folks, get a grip. You don't need to take my word for it. Read about 50 or 100 of these papers, and you will see that these issues are being tackled every day by physicists who wake up thinking about this. And you will also see mixed in many interesting alternative ideas which are out of the mainstream: these articles are not all peer reviewed. Look for a Journal Reference to get those, which tend to be mainstream and higher quality overall. Many experimental results will be peer reviewed.

I like this approach very much. We should never forget the need for what works, and how it works in the midst of WHY it works.
 
  • #956
my_wan said:
This is the worst possible argument. It is almost precisely the same argument that a friend of mine, who turned religious, used to try to convert me. It's invalid in any context, no matter how solid the claim it's used to support. I cringe no matter how trivially true the claim it is used to support may be. So if the majority "don't understand", as you have stated for yourself, acceptance of this argument makes the majority acceptance a self-fulfilling prophecy.
Huh? I said it was ThomasT who didn't understand Bell's proof, not the majority of physicists. And in technical subjects like science and math, I think it's perfectly valid to say that if some layman doesn't understand the issues very well but is confused about the justification for some statement that virtually all experts endorse, the default position of a layman showing intellectual humility should be that it's more likely the mistake lies with his/her own understanding, rather than taking it as a default that they've probably found a fatal flaw that all the experts have overlooked and proceeding to try to convince others of that. Of course this is just a sociological statement about likelihood that a given layman has actually discovered something groundbreaking, I'm not trying to argue that anyone should take the mainstream position on faith or not bother asking questions about the justification for this position. But if you don't take this advice there's a good chance you'll fall victim to the Dunning-Kruger effect, and perhaps also become the type of "bad theoretical physicist" described by Gerard 't Hooft here.
my_wan said:
You call it a postulate of local realism, but fail to mention that this "postulate of local realism" is predicated on a very narrowly defined 'operational' definition, which even its originators (EPR) disavowed it, at the time it was proposed, as the only, sufficiently complete, etc., as a complete definition.
I define "local realism" to mean that facts about the complete physical state of any region of spacetime can be broken down into a sum of local facts about the state of individual points in spacetime in that region (like the electromagnetic field vector at each point in classical electromagnetism), and that each point can only be causally influenced by other points in its past light cone. Do you think that this is too "narrowly defined" or that EPR would have adopted a broader definition where the above wasn't necessarily true? (if so, can you provide a relevant quote from them?) Or alternatively, do you think that Bell's derivation of the Bell inequalities requires a narrower definition than the one I've just given?
my_wan said:
Unequivocally false. There are other options, unless you want to insist that one 'operational' definition is, by academic fiat, the only definition of realism available. Even then it doesn't make you right; you have only chosen a definition to ensure you can't be wrong.
I don't know what you mean by "operational", my definition doesn't appear to be an operational one but rather an objective description of the way the laws of physics might work. If you do think my definition is too narrow and that there are other options, could you give some details on what a broader definition would look like?
my_wan said:
There are whole ranges of issues involved. Many of which may have some philosophical content that don't strictly belong in science, unless of course you can formalize it into something useful. Yet the "realism" claim associated with Bell is a philosophical claim, by taking a formalism geared toward a single 'operational' definition and expanding it over the entire philosophical domain of realism. It's a massive composition fallacy.
In a scientific/mathematical field it's only meaningful to use terms like "local realism" if you give them some technical definition which may be different than their colloquial meaning or their meaning in nonscientific fields like philosophy. So if a physicist makes a claim about "local realism" being ruled out, it doesn't really make sense to say the claim is a "fallacy" on the basis of the fact that her technical definition doesn't match how you would interpret the meaning of that phrase colloquially or philosophically or whatever. That'd be a bit like saying "it's wrong to define momentum as mass times velocity, since that definition doesn't work for accepted colloquial phrases like 'we need to get some momentum going on this project if we want to finish it by the deadline'".
my_wan said:
The composition fallacy runs even deeper. There's the assumption that the things we measure are existential in the sense of things.
Not sure what you mean. Certainly there's no need to assume, for example, that when you measure different particle's "spins" by seeing which way they are deflected in a Stern-Gerlach device, you are simply measuring a pre-existing property which each particle has before measurement (so each particle was already either spin-up or spin-down on the axis you measure).
my_wan said:
Even if every possible measurable we are capable of is provably no more real than a coordinate choice
Don't know what you mean by that either. Any local physical fact can be defined in a way that doesn't depend on a choice of coordinate system, no?
my_wan said:
it is NOT proof that things don't exist independent of being measured (the core assumption of realism).
Since I don't know what it would mean for "every possible measurable we are capable of is provably no more real than a coordinate choice" to be true, I also don't know why the truth of this statement would be taken as "proof that things don't exist independent of being measured". Are you claiming that any actual physicists argue along these lines? If so, can you give a reference or link?
my_wan said:
Fundamentally, realism is predicated on measurement independence, and cannot be proved false on the grounds that an act of measurement has effects.
I don't see why, nothing about my definition rules out the possibility that the act of measurement might always change the system being measured.
my_wan said:
So please, at least qualify local realism with "Einstein realism", "Bell realism", or some other suitable qualifier, so as not to make the absurd excursion into a blanket philosophical claim that the entire range of all forms of "realism" are provably falsified.
All forms compatible with my definition of local realism are incompatible with QM. I don't know if you would have a broader definition of "local realism" than mine, but regardless, see my point about the basic independence of the technical meaning of terms and their colloquial meaning.
 
  • #957
zonde said:
Interesting. And do those papers suggest at least approximately what kind of experiments they will be?
Or is it just a very general idea?
See for example this paper and this one...the discussion seems fairly specific.
zonde said:
Besides, if you want to discuss betting with money, you are in the wrong place.
I was just trying to get a sense of whether Bill actually believed himself it was likely that all the confirmation of QM predictions in these experiments would turn out to be a consequence of a local realist theory that was "exploiting" both the detector efficiency loophole and the locality loophole simultaneously, or if he was just scoffing at the fact that experiments haven't closed both loopholes simultaneously for rhetorical purposes (of course there's nothing wrong with pointing out the lack of loophole-free experiments in this sort of discussion, but Bill's triumphant/mocking tone when pointing this out would seem a bit hollow if he didn't actually think such a loophole-exploiting local theory was likely).
 
  • #958
DrChinese said:
Does it help if we say that BI violations blanket falsify claims of EPR (or Bell) locality and EPR (or Bell) realism? Because if words are to have meaning at all, this is the case.
That is in fact the case. BI violations do in fact rule out the very form of realism they were predicated on. "EPR local realism" would be fine, as that names the source of the operational definition Bell did in fact falsify. Some authors already do this; for example, Adan Cabello spelled it out as "Einstein-Podolsky-Rosen element of reality" (Phys. Rev. A 67, 032107 (2003)). Perfectly acceptable.

As an aside, I really doubt that any given individual element of "physical" reality, assuming such exist and realism holds, corresponds to any physically measurable quantity. This does not a priori preclude a theoretical construct from successfully formalizing such elements. Note how diametrically opposed this is to the operational definition used by EPR:
“If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of physical reality corresponding to this physical quantity.”

The original EPR paper gave a more general definition of realism that wasn't so contingent on the operational definition:
"every element of physical reality must have a counterpart in physical theory."
Though this almost certainly implicitly assumed a correspondence I consider more than a little suspect, it doesn't a priori assume that an element of physical reality has a direct correspondence with observables. Neither do I consider an empirically complete theory incomplete on the grounds that unobservables may be presumed to exist yet not be defined by the theory. That also opposes Einstein's realism. However, certain issues, such as the vacuum catastrophe, GR + QM, and dark matter/energy, are a fairly good if presumptuous indicator of incompleteness.

These presumptions, which oppose the EPR definition, began well before I was a teenager or had any clue about EPR or QM, and were predicated on hard realism. Thus I can't make specific claims; I can only state that blanket philosophical claims about what BI violations falsify are a highly unwarranted composition fallacy, not that the claim is ultimately false.
 
  • #959
JesseM said:
I was just trying to get a sense of whether Bill actually believed himself it was likely that all the confirmation of QM predictions in these experiments would turn out to be a consequence of a local realist theory that was "exploiting" both the detector efficiency loophole and the locality loophole simultaneously, or if he was just scoffing at the fact that experiments haven't closed both loopholes simultaneously for rhetorical purposes
And as I explained, I do not engage in these discussions for religious purposes, so I'm surprised that you would expect me to bet. A claim has been made about the non-locality of the universe. I, and others, have raised questions about the premises used to support that claim. Rather than explain why the premises are true, you expect me to bet that the claim is not true. In fact, the suggestion is itself perhaps suggestive of your approach to these discussions, which I do not consider to be about winning or losing an argument but about understanding the truth of the issues in front of us.

The fact that QM and experiments agree is a big hint that the odd man out (the Bell inequalities) does not model the same thing as QM does, which is what is realized in real experiments. There is no question about this. I think you agree with this. So I'm not sure why you think that repeatedly mentioning the fact that numerous experiments have agreed with QM somehow advances your argument. It doesn't. Also, the phrase "experimental loopholes" is a misnomer because it gives the false impression that there is something "wrong" with the experiments, such that "better" experiments have to be performed. This is a backwards way of looking at it. Every so-called "loophole" is actually a hidden assumption made by Bell in deriving his inequalities.

When I mentioned "assumption" previously, you seemed to express surprise, despite the fact that I have already pointed out to you several times hidden assumptions within Bell's treatment that make it incompatible with Aspect-type experiments. If any one or more of the assumptions in Bell's treatment are not met in the experiments, Bell's inequalities will not apply. The locality assumption is explicit in Bell's treatment, so Bell's proponents think violation of the inequalities definitely means violation of the locality principle. But there are other hidden assumptions, such as:

1) Every photon pair will be detected (due to choice of only +/- as possible outcomes)
2) P(λ) is equivalent for each of the terms of the inequality
3) Datasets of pairs are extracted from a dataset of triples
4) Non-contextuality
5) ...

And there are others I have not mentioned or that are yet to be discovered. So whenever you hear about the "detection efficiency loophole", the issue really is a failure of hidden assumption (1). And the other example I gave a few posts back, about cyclicity and indexing, involves the failure of (2) and (3).

It is therefore not surprising that some groups have reported on locally causal explanations of many of these Bell-test experiments, again confirming that the problem is in the hidden assumptions used by Bell, not in the experimenters.

(of course there's nothing wrong with pointing out the lack of loophole-free experiments in this sort of discussion, but Bill's triumphant/mocking tone when pointing this out would seem a bit hollow if he didn't actually think such a loophole-exploiting local theory was likely).
I make an effort to explain my point of view; you are free to completely demolish it with legitimate arguments. I will continue to point out the flaws I see in your responses (as long as a relevant response can be discerned from them), and if your arguments are legitimate, I will change my point of view accordingly. But if you cannot provide a legitimate argument and you think of the goal of discussion as one of winning/losing, you may be inclined to interpret my conviction about my point of view as "triumphant/mocking". But that is just your perspective and you are entitled to it, even if it is false.
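To see concretely why non-detection matters to the argument, here is a deliberately artificial toy model (my own construction for this point, not any published local model): each particle registers only under one local setting, determined by the hidden variable, and goes undetected otherwise. Every output depends only on the local setting and the hidden variable, so the model is strictly local, yet computing correlations on the post-selected coincidences yields the algebraic maximum S = 4.

```python
import itertools

# Deliberately artificial toy model (my construction, not a published one).
# The hidden variable lam = (target_a, target_b, product) tells each particle
# to register ONLY under one local setting; otherwise it goes undetected.
SETTINGS = (0, 1)  # Alice: a/a'; Bob: b/b'

def alice(setting, lam):
    target_a, _, _ = lam
    return +1 if setting == target_a else None       # None = no detection

def bob(setting, lam):
    _, target_b, product = lam
    return product if setting == target_b else None

sums, counts = {}, {}
for target_a, target_b in itertools.product(SETTINGS, SETTINGS):
    product = -1 if (target_a, target_b) == (1, 1) else +1  # aim for |S| = 4
    lam = (target_a, target_b, product)
    for a_set, b_set in itertools.product(SETTINGS, SETTINGS):
        A, B = alice(a_set, lam), bob(b_set, lam)
        if A is None or B is None:
            continue            # post-selection: keep only coincidences
        sums[(a_set, b_set)] = sums.get((a_set, b_set), 0) + A * B
        counts[(a_set, b_set)] = counts.get((a_set, b_set), 0) + 1

E = {k: sums[k] / counts[k] for k in sums}
S = E[(0, 0)] + E[(0, 1)] + E[(1, 0)] - E[(1, 1)]
print(S)  # 4.0 -- beyond even the quantum bound, from a purely local model
```

This is why the detection-loophole literature derives efficiency thresholds (e.g. the Garg-Mermin and Eberhard bounds): above a high enough detection efficiency, no local construction of this sort can mimic the quantum correlations.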
 
  • #960
billschnieder said:
It is therefore not surprising that some groups have reported on locally causal explanations of many of these Bell-test experiments, again confirming that the problem is in the hidden assumptions used by Bell, not in the experimenters.

When you say "explanations", I wonder exactly what qualifies as an explanation. The only local realistic model I am aware of is the De Raedt et al model, a computer simulation which satisfies Bell. All other local explanations I have seen are either not realistic or have been generally refuted (e.g. Christian, etc.). And again, by realistic, I mean per the Bell definition (simultaneous elements of reality for settings a, b and c).
 
