Scholarpedia article on Bell's Theorem

Summary
The discussion centers on a newly published review article on Bell's Theorem by Goldstein, Tausk, Zanghi, and the author, which aims to clarify ongoing debates surrounding the theorem. The article is presented as a comprehensive resource for understanding Bell's Theorem, addressing various contentious issues. However, some participants express disappointment, noting that it lacks references to significant critiques of non-locality and fails to mention historical connections to Boole's inequalities. The conversation highlights differing interpretations of terms like "non-locality" and "realism," with some advocating for a more nuanced understanding. Overall, the article is seen as a valuable contribution, yet it also invites scrutiny and further discussion on its claims and omissions.
  • #481
DevilsAvocado said:
... Maybe I was wooly, but that is exactly what I meant; determinism + no free will = superdeterminism.

I don't like dragging free will into discussions about science, because I just don't think it has any relevance. "Free" choices that humans make could very well be determined by conditions at a microscopic level, and that wouldn't make much difference, in practice. What I thought was the difference between determinism and superdeterminism is this:

  • A theory is deterministic if a past state uniquely singles out one possible future state.
  • A theory is superdeterministic if there is only one possible past, as well.

Newtonian physics is deterministic, but not superdeterministic. It gives the future positions and velocities of particles in terms of past positions and velocities, but the initial positions and velocities are arbitrary. So there are many possible pasts, but for each possible past, there is exactly one possible future.

A superdeterministic theory would have constraints that single out only one possible initial condition, as well as only one possible future. I can easily imagine what a superdeterministic theory might look like.

For example, suppose that instead of the usual theory that takes an initial state at one point in time and evolves it toward a state at a later point in time, we have a theory that evolves the entire history of the universe in terms of an unobservable "hypertime" parameter. So you start with a complete history of the universe, h_0, where h_0(t) gives the state of the universe at time t. Then you solve the initial-value equations:

H(t,0) = h_0(t)
\dfrac{d}{ds} H(t,s) = G(H(t,s), \frac{\partial}{\partial t} H(t,s))

Then the "actual" history of the universe is some kind of limit:

h(t) = \lim_{s \rightarrow \infty} H(t,s)

It could very well be that such a limit is independent of the initial choice of h_0(t).
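As a toy illustration (entirely my own construction, not tied to any actual proposal), here is a sketch in which a whole candidate history H(t, s) is relaxed in an unobservable "hypertime" s toward a history satisfying d^2h/dt^2 = -h with fixed endpoints; two very different initial guesses h_0(t) relax to the same limit:

```python
import random

T, N = 2.0, 40                       # total time span and number of grid points
dt = T / (N - 1)                     # time step
ds = 0.4 * dt * dt                   # hypertime step, small enough to be stable

def relax(h0, steps=40_000):
    """Evolve the whole history in hypertime: dH/ds = d^2H/dt^2 + H (interior points only)."""
    h = list(h0)
    for _ in range(steps):
        new = h[:]
        for i in range(1, N - 1):
            d2 = (h[i + 1] - 2 * h[i] + h[i - 1]) / (dt * dt)
            new[i] = h[i] + ds * (d2 + h[i])
        h = new                      # endpoints h[0], h[-1] stay fixed
    return h

# two very different initial histories sharing the same endpoints h(0)=0, h(T)=1
rng = random.Random(0)
h_a = [0.0] + [rng.uniform(-5, 5) for _ in range(N - 2)] + [1.0]
h_b = [0.0] + [3.0] * (N - 2) + [1.0]

limit_a, limit_b = relax(h_a), relax(h_b)
print(max(abs(x - y) for x, y in zip(limit_a, limit_b)))   # ~ 0: the limit forgets h_0
```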

I seem to recall that Stephen Hawking worked on an idea like this for the "wave function of the universe", where the details of both future and past were fixed by requirements of self-consistency.
 
  • #482
stevendaryl said:
So the theory of local beables says that if an observation at E allows you to predict something about the results of an observation at D, then the same prediction in principle could have been made using only facts from region C.

But the problem with that theory is that if you have the results of the measurements you have already done at A and B with settings (a, b), those results must influence your prediction for what you would have obtained had you measured at the angles (a', b') instead (counterfactually).

Traditionally, we have always analyzed the situation as: the result at Alice tells us something about the result at Bob. However, the way we should analyze it is that the correlation between Alice and Bob at angle pair (a, b) tells us something about what correlation would have been obtained had Alice and Bob measured at angle pair (a, b') or (a', b) instead.

But looking at how Bell's theorem is derived, the same values are used for the correlation in each term without any interdependence requirement. This is a mathematical error. It is related to the measurable-sets issue because the terms in the CHSH, being counterfactual, are not all simultaneously measurable. So no experiment can ever be done to verify it.
 
  • #483
billschnieder said:
But the problem with that theory is that if you have the results of the measurements you have already done at A and B with settings (a, b), those results must influence your prediction for what you would have obtained had you measured at the angles (a', b') instead (counterfactually).

Traditionally, we have always analyzed the situation as: the result at Alice tells us something about the result at Bob. However, the way we should analyze it is that the correlation between Alice and Bob at angle pair (a, b) tells us something about what correlation would have been obtained had Alice and Bob measured at angle pair (a, b') or (a', b) instead.

But looking at how Bell's theorem is derived, the same values are used for the correlation in each term without any interdependence requirement. This is a mathematical error. It is related to the measurable-sets issue because the terms in the CHSH, being counterfactual, are not all simultaneously measurable. So no experiment can ever be done to verify it.

I'm not exactly sure what you're saying. Are you saying that a local realistic theory can reproduce the predictions of QM for the EPR experiment?

Anyway, Bell's "local beables" are NOT measurements. There is no assumption that any measurements were performed in regions A or B. We're only talking about measurements performed at D and E.
 
  • #484
billschnieder said:
But only two of them can be measured at the same time as your diagram shows. It still does not change the fact that we have outcomes at 4 angles. For each angle there is an outcome. Wasn't this supposed to be the "Realism" assumption?

Well, there is not “an outcome” for every angle (obviously), but IF you are a proponent of LHV then you must obviously be able to handle any [relative] angle between 0-360° since you are completely ‘helpless’ once at the measuring apparatus, facing the actual random settings. This is a problem for supporters of Local Realism, not for Bell (And how could it be? Bell has proven LHV wrong? We can’t possibly expect Bell to provide a working LHV theory, could we??).

Let's show that the result for E(0°, 67.5°) depends on the result for E(0°, 22.5°). Let us consider only the first two photons emitted and then write down their fate for the possible angles in the above expression. We'll use +1 or -1 for the result at each polarizer.

Say for the (0°, 22.5°) experiment we have (+1, -1) for 0° and 22.5° respectively. We then must necessarily have obtained (+1, ?) for (0°, 67.5°) , and (?, -1) for (45°, 22.5°) (where ? represents either +1 or -1). Surely it is common sense to see that if we measured the first photon pair at (0°, 22.5°) and got (+1, -1) , we would have gotten (+1, ?) had we measured at (0°, 67.5°) instead, and we definitely would have gotten (?, -1) had we measured at (45°, 22.5°) instead (Remember CFD). We could repeat this for every pair of photons and it doesn't take rocket science to appreciate the fact that knowing the results of one experiment affects how we calculate the others, therefore the results for the different correlations in the CHSH are interdependent.

I’m sorry Bill, this is where you get it wrong. Every individual outcome at A & B is always 100% random; it does not make any sense to imagine static/predetermined results in QM – it’s always a 50/50 chance of getting +1/-1. The only exception is perfect correlations, i.e. at aligned angles: if you first measure A to be +1, then you will know the outcome of B. But then you run into relativistic difficulties in deciding who makes the measurement first (depending on the observer).

You can think of it like this: IF it was possible to ‘control’ the outcome of an EPR-Bell experiment, we could use it to send FTL information! And I hope this is not what you are claiming, right?

Your reasoning about “we then must necessarily have obtained” is a counterfactual definiteness argument, and as we all know Bell’s theorem shows that Realism(CFD) and/or Locality has to go to be compatible with QM. So let’s say we preserve CFD and sacrifice Locality. What happens then? Well, this would mean non-local hidden variables (as in dBB), and ‘something’ is then checking the apparatus settings before the particles leave the source, and this will bring you back to square one: you can’t possibly know the outcome if the settings were different.

Trust me.


[To Steven & Co., I have to go now, no time, hopefully back tomorrow]
 
  • #485
stevendaryl said:
I should point out that there is a guy, Joy Christian, who claims that Bell's theorem is mistaken, and that it implicitly assumes something about the topology of space, but I have been unable to make any sense of his claims.

His claims do not make sense. He has some strange construction which gives some E(a,b), but he has not understood that these E(a,b) are only computed from the probabilities p(A,B|a,b), which are observables, and that A, B are simply the observed values, +1 or -1.

See my discussion with him at http://fqxi.org/community/forum/topic/812

Ilja said:
I have asked you a quite simple question. Again, even simpler: Have you provided, in one of your papers, a local model which predicts probabilities p(A,B|a,b) so that the corresponding expectation values E(a,b)=sum AB p(A,B|a,b) violate Bell's inequalities?

If yes, tell me the paper and the pages. If not, that's my point.

In this case, explain where the error in my argument is. I think the probabilities p(A,B|a,b) are predicted by QM, and the corresponding frequencies are measured in experiments in a quite open way, without anything hidden. So any realistic model has to recover them.

What's your problem with explaining to me the simple error in such a simple argument? Of course, I have no eyes to see my own errors, else I would not make them. That's a tautology. So, please help a poor soul, who is unable to understand your deep thoughts, to see his trivial error.

Else, I wish you success with publishing your model in a good journal - it would be a nice possibility for me to receive some explanation of my error by peer review, or to publish a rebuttal there.

Christian said:
In almost all of my papers you will find a local-realistic model that exactly reproduces all of the predictions of quantum mechanics for the singlet state. And by all I mean all. I have no interest in educating you otherwise.
 
  • #486
billschnieder said:
The error then is to assume that the following two correlations below are the same:

(1) C(b,c) = QM correlation for what we would get if we measure (b,c)
(2) C(b,c) = QM correlation for what we would have gotten had we measured (b,c) instead of the (a,b) we measured.

They are not.
Here is what I said to you in an old thread:
lugita15 said:
I make the crucial assumption, which I expect that you disagree with or think is misleading, that the following two probabilities are always equal:
1. The probability that this photon would go through a polarizer if it is oriented at angle x, given that the polarizer is actually oriented at angle x.
2. The probability that this photon would go through a polarizer if it is oriented at angle x, given that the polarizer is NOT actually oriented at angle x, but instead some different angle y.

Now why do I assume that these two probabilities are equal? Because I am assuming that the answers to the following two questions are always the same:
1. What result would you get if you send this photon through a polarizer oriented at angle x, given that the polarizer is actually oriented at angle x?
2. What result would you get if you send the photon through a polarizer oriented at angle x, given that the polarizer is NOT oriented at angle x, but instead some different angle y?

And why do I assume that these questions have the same answer? That seems to me to be a consequence of the no-conspiracy condition: the properties (or answers to questions) that are predetermined cannot depend on the specific measurement decisions that are going to be made later, since by assumption those decisions are free and independent.
 
  • #487
stevendaryl said:
I'm not exactly sure what you're saying. Are you saying that a local realistic theory can reproduce the predictions of QM for the EPR experiment?
I'll leave that up to Joy Christian and the others to figure out. I'm not saying that at all. Rather, I'm saying that violation of Bell's and CHSH inequalities by QM and experiments tells us nothing, because the use of correlations from QM and from experiments to compare with the inequalities is deeply flawed. In addition, I'm saying it is impossible to violate the inequalities if you are reasoning correctly, even if you assume that non-local causation or any other spooky stuff is happening. The inequalities are mathematical tautologies which apply to all theories, local or non-local!

Anyway, Bell's "local beables" are NOT measurements. There is no assumption that any measurements were performed in regions A or B. We're only talking about measurements performed at D and E.

I meant D and E. Your non-standard choice of notation threw me off. But the argument does not change; just substitute D and E. I should have said:

But the problem with that theory is that if you have the results of the measurements you have already done at D and E with settings (a, b), those results must influence your prediction for what you would have obtained had you measured at the angles (a', b') instead (counterfactually).

Traditionally, we have always analyzed the situation as: the result at Alice tells us something about the result at Bob. However, the way we should analyze it is that the correlation between Alice and Bob at angle pair (a, b) tells us something about what correlation would have been obtained had Alice and Bob measured at angle pair (a, b') or (a', b) instead.

But looking at how Bell's theorem is derived, the same values are used for the correlation in each term without any interdependence requirement. This is a mathematical error. It is related to the measurable-sets issue because the terms in the CHSH, being counterfactual, are not all simultaneously measurable. So no experiment can ever be done to verify it.
 
  • #488
billschnieder said:
I'll leave that up to Joy Christian and the others to figure out. I'm not saying that at all. Rather, I'm saying that violation of Bell's and CHSH inequalities by QM and experiments tells us nothing, because the use of correlations from QM and from experiments to compare with the inequalities is deeply flawed. In addition, I'm saying it is impossible to violate the inequalities if you are reasoning correctly, even if you assume that non-local causation or any other spooky stuff is happening. The inequalities are mathematical tautologies which apply to all theories, local or non-local!


Now I'm completely confused by what you're saying. There is no problem in violating Bell's inequalities if we allow nonlocal causation.
 
  • #489
stevendaryl said:
Now I'm completely confused by what you're saying. There is no problem in violating Bell's inequalities if we allow nonlocal causation.
Are you sure? If you insist, I suppose you can provide a NON-LOCAL dataset of outcomes for three angles a, b, c which violates the inequalities. Using your assumption of non-locality, please generate such a dataset in the form:

a, b, c
-----------
-1, +1, -1
+1, -1, -1
+1, +1, -1
-1, -1, -1
-1, -1, -1
-1, +1, +1
...

For any number of photons you like. Then we will calculate the correlations from it and verify whether it violates Bell's inequalities as you claim. Note you can use any assumption you like in generating the outcomes; specifically, please use non-locality and spooky action at a distance. The only condition is that there are 3 outcomes for 3 angles for each photon measured.
 
  • #490
DevilsAvocado said:
Well, there is not “an outcome” for every angle (obviously),
Are you serious? You must have misunderstood something very fundamental about the EPR experiment. For each photon that leaves the source and heads towards Alice, you have a single outcome (+, -, or non-detection). Alice's polarizer is set to a specific angle, say 67.5°, for that photon. Same thing for Bob. It is only when Alice has collected all her pluses and minuses, and Bob has done the same, that they start comparing time-tags to find coincidences; only then can they figure out what the angular difference was at the moment of detection! The photon gives no rodent's behind whether Alice's angle was chosen randomly or not! All it meets is a polarizer at a given angle, resulting in an OUTCOME of (+, -, or non-detection).

This is a problem for supporters of Local Realism, not for Bell (And how could it be? Bell has proven LHV wrong? We can’t possibly expect Bell to provide a working LHV theory, could we??).
Who talked about expecting Bell to provide a working LHV theory? He assumed LHV from the start; you can't then throw LHV out and complain that LHV must be wrong because it is not obeyed by what you have left. It is simply called sound reasoning.

I’m sorry Bill, this is where you get it wrong. Every individual outcome at A & B is always 100% random
So? Who said anything about the stream of outcomes at Alice or Bob appearing other than random? That does not change the fact that there is an outcome. I have the files from Weihs' experiment, and there is one outcome for each photon detected.

it does not make any sense to imagine static/predetermined results in QM – it’s always a 50/50 chance of getting +1/-1.
I'm afraid you have seriously misunderstood. Nobody is assuming static/predetermined results. The result is random for a given photon, but once Alice has measured and obtained +1 for that photon at 67.5°, it is a mathematical/logical error to say Alice would have obtained -1 had she measured that specific photon at the same specific angle 67.5°! You can't set a realism assumption and then immediately gut it and expect it to stay put.

Continuing in next post ...
 
  • #491
Your reasoning about “we then must necessarily have obtained” is a counterfactual definiteness argument, and as we all know Bell’s theorem shows that Realism(CFD) and/or Locality has to go to be compatible with QM.
Huh? I just showed you that it is impossible to derive Bell's theorem without using CFD. See post #473.

So let’s say we preserve CFD and sacrifice Locality. What happens then?
We do not need to sacrifice anything, because there is nothing there to start with. The terms in Bell's inequality and the CHSH can never be tested experimentally, if reasoning correctly. The inequalities can never be violated if reasoning correctly. So I guess what has to be sacrificed is buffoonery.

Well, this would mean non-local hidden variables (as in dBB), and ‘something’ is then checking the apparatus settings before the particles leave the source, and this will bring you back to square one: you can’t possibly know the outcome if the settings were different.

Let me say it one more time in case you missed it the last time. The inequality (Bell's original) is: |C(a,b)−C(a,c)| <= 1+C(b,c). We have three terms here: C(a,b), C(a,c), C(b,c). Those terms can never all be factual as far as the EPR experiment is concerned. At least two of them MUST be counterfactual! There is no other way. Thinking otherwise is just buffoonery. The inequalities can NOT be derived UNLESS the other two terms are counterfactual. As soon as you see that, you realize immediately that NO EXPERIMENT can ever measure them all! None! You can measure one but not the other two. Is that clear enough? So then, we are left with a lot of experimentalists who do not know what they are doing, publishing in lofty journals whose editors and reviewers do not know what they are doing, and many who love mysticism regurgitating what they've read without thinking for themselves. No news here.

Continuing below ...
 
  • #492
If you doubt the above, we can go through Bell's derivation exactly as he did it and confirm that those terms are indeed counterfactual. Having eliminated all the experiments, we now have QM left. How come, then, that QM can violate the inequalities? Because the terms that people calculate from QM and substitute into the inequalities in order to obtain a violation are not the correct terms.

They've calculated and used the following three terms (scenario X):
C(a,b) = QM correlation for what we would get if we measure (a,b)
C(b,c) = QM correlation for what we would get if we measure (b,c)
C(a,c) = QM correlation for what we would get if we measure (a,c)

When Bell's inequalities DEMAND that the correlations should be (scenario Y):
C(a,b) = QM correlation for what we would get if we measure (a,b)
C(a,c) = QM correlation for what we would have gotten had we measured (a,c) instead of (a,b)
C(b,c) = QM correlation for what we would have gotten had we measured (b,c) instead of (a,b)

They naively think the terms in scenario X must be the same as the terms in scenario Y. They even go a step further and call that the no-conspiracy condition. But let me show how naive that is.

Consider a pair of photons heading toward Alice and Bob respectively, with polarizers set to the angles (a,b). Let us say the possible outcomes are (+, -, 0 for non-detection) for each side and they are all equally likely. What is the probability of the outcome being + at Alice?

P(+|a) = 1/3

Now what is the probability that we would have obtained + at Alice if Alice and Bob had measured at angles (a,c) instead of (a,b)? Note this is counterfactual. If you answer 1/3, you need to learn some probability theory. The correct answer is 1: we already know that measuring the photon at angle a gives +. Where is the conspiracy in that?! Knowing what was obtained in the factual experiment changes the probability we calculate for the counterfactual situation; nothing spooky is involved. Now we can carry this all the way and include coincidences, and you will see that using the scenario X correlations in Bell's inequality is deeply flawed.

Then, what about the agreement between experiment and QM? Because scenario X is actually what is measured, since scenario Y is impossible to measure in experiments. To conclude, QM gives the correct answer for the experiments performed, but neither QM nor the experiments can provide the right answers for substitution into the inequalities.
 
  • #493
stevendaryl said:
Now I'm completely confused by what you're saying. There is no problem in violating Bell's inequalities if we allow nonlocal causation.

Here's a related argument (not a proof, because I'm leaving out the key mathematical step), which I think is mathematically simpler than Bell's proof, and it allows us to see exactly where the assumption of locality comes in.

Let's suppose that Alice and Bob have detectors oriented in the x-y plane, so that the orientation can be characterized by an angle.

We randomly produce a twin-pair of spin-1/2 particles, and Alice and Bob each measure the spin of one of the two particles. Let P(\alpha, \beta) be the probability that Alice measures spin-up at angle \alpha and Bob measures spin-up at angle \beta. The prediction of quantum mechanics is:

P(\alpha, \beta) = \frac{1}{2} sin^2(\frac{1}{2} (\beta - \alpha))

Now, if we assume that this prediction can be "explained" in terms of a local variable \lambda, then we can write this in the following form:

P(\alpha, \beta) = \int P_L(\lambda) P_A(\alpha, \lambda) P_B(\beta, \lambda) d\lambda

The idea behind writing in this form is that we assume that when the twin pair is created, a random value of \lambda is chosen with probability distribution P_L(\lambda). Then each particle carries this value of \lambda to the detector. Then the probability of Alice measuring spin-up depends on \lambda, and also depends on her detector orientation \alpha. So P_A(\alpha, \lambda) is the probability that Alice will measure spin-up given that the particle has value \lambda and her detector has orientation \alpha. Similarly P_B(\beta, \lambda) gives the probability of Bob measuring spin-up.

The fact is that there are no functions P_L(\lambda), P_A(\alpha, \lambda) and P_B(\beta, \lambda) such that

\int P_L(\lambda) P_A(\alpha, \lambda) P_B(\beta, \lambda) d\lambda = \frac{1}{2} sin^2(\frac{1}{2} (\beta - \alpha))

So you can't reproduce the joint probabilities of quantum mechanics by "factored" probabilities of this type. Now, if you allow instantaneous causal effects, there is no problem coming up with a model that reproduces the quantum prediction: Assume that the probability of measuring spin-up is initially \frac{1}{2} for any direction, and then, if one experimenter (Alice or Bob) measures spin first, the spin-up probability for the other experimenter instantaneously changes to sin^2(\frac{1}{2}(\beta - \alpha)) (if the first result was spin-up; cos^2(\frac{1}{2}(\beta - \alpha)) if it was spin-down). That's the "wave function collapse" interpretation, and it of course agrees with quantum mechanics.

So where the assumption of locality comes in is the assumption that the probability P(\alpha, \beta) factors into a form \int P_L(\lambda) P_A(\alpha, \lambda) P_B(\beta, \lambda) d\lambda. If there are faster-than-light causal influences, there is no reason to believe that it factors this way.
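For concreteness, here is a rough Monte Carlo sketch of that collapse-style model (my own toy code; the angle and sample size are arbitrary choices): Alice measures first with a 50/50 outcome, and Bob's spin-up probability is then updated non-locally according to Alice's result. The estimated joint probability matches \frac{1}{2} sin^2(\frac{1}{2}(\beta - \alpha)).

```python
import math
import random

def joint_up_probability(alpha, beta, n=200_000, seed=1):
    """Estimate P(Alice up AND Bob up) in the nonlocal 'collapse' model."""
    rng = random.Random(seed)
    both_up = 0
    for _ in range(n):
        alice_up = rng.random() < 0.5                        # Alice: 50/50 in any direction
        # Bob's probability instantaneously depends on Alice's result (the nonlocal step)
        p_bob_up = (math.sin((beta - alpha) / 2) ** 2 if alice_up
                    else math.cos((beta - alpha) / 2) ** 2)
        if alice_up and rng.random() < p_bob_up:
            both_up += 1
    return both_up / n

alpha, beta = 0.0, math.radians(60)
print(joint_up_probability(alpha, beta))                     # ~ 0.125
print(0.5 * math.sin((beta - alpha) / 2) ** 2)               # QM prediction: 0.125
```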
 
  • #494
stevendaryl said:
Here's a related argument (not a proof, because I'm leaving out the key mathematical step), which I think is mathematically simpler than Bell's proof,

I think you are missing the point. What I'm asking you is even simpler and more transparent. It is DrC's challenge for non-locality. In case you are in doubt about what I mean, I'm referring to the difference between

a) A non-local system will generate a dataset which violates Bell's inequality, and
b) It is impossible to find a dataset which violates the inequality (local or non-local).

I'm claiming b) and you are claiming a). So I ask that you provide the dataset. Talking about separability etc. just confirms my point, not yours.
 
  • #496
billschnieder said:
Are you sure? If you insist, I suppose you can provide a NON-LOCAL dataset of outcomes for three angles a, b, c which violates the inequalities. Using your assumption of non-locality, please generate such a dataset in the form:

a, b, c
-----------
-1, +1, -1
+1, -1, -1
+1, +1, -1
-1, -1, -1
-1, -1, -1
-1, +1, +1
...

For any number of photons you like. Then we will calculate the correlations from it and verify whether it violates Bell's inequalities as you claim. Note you can use any assumption you like in generating the outcomes; specifically, please use non-locality and spooky action at a distance. The only condition is that there are 3 outcomes for 3 angles for each photon measured.

The whole point is that such a chart has no relevance to the EPR experiment unless we assume that there are no causal influences that travel faster than light. You have Alice choosing one of three different detector orientations, and you similarly have Bob choosing one of three different detector orientations. If you assume that Alice's result does not depend on Bob's detector orientation, and vice-versa, then that means (in a classical, realistic theory) that for run number n of the experiment, there should be probabilities

P_{n,a}, P_{n,b}, P_{n,c}, P'_{n,a}, P'_{n,b}, P'_{n,c}

where P_{n,a} is the probability, in run number n, of Alice detecting a photon at orientation a, and P'_{n,a} is the probability, in run number n, of Bob detecting a photon at orientation a, etc. You can show that there is no probability distribution on 6-tuples of real numbers that gives the quantum predictions.

But if we allow for faster-than-light causal effects, then there is no reason to assume that such 6-tuples exist. Instead, we would have an 18-tuple:

P_{n,a,a}, P_{n,b,a}, P_{n,c,a},
P_{n,a,b}, P_{n,b,b}, P_{n,c,b},
P_{n,a,c}, P_{n,b,c}, P_{n,c,c},
P'_{n,a,a}, P'_{n,b,a}, P'_{n,c,a},
P'_{n,a,b}, P'_{n,b,b}, P'_{n,c,b},
P'_{n,a,c}, P'_{n,b,c}, P'_{n,c,c}

where P_{n,x,y} is the probability, on run n, that Alice detects a photon at angle x given that Bob's detector is oriented at angle y, and P'_{n,x,y} is the probability, on run n, that Bob detects a photon at angle y given that Alice's detector is oriented at angle x.

There is absolutely no problem in coming up with such 18-tuples that satisfy the predictions of quantum mechanics.

The assumption of classical locality is that you can get away with just 6-tuples instead of 18-tuples.
 
  • #497
billschnieder said:
I think you are missing the point. What I'm asking you is even simpler and more transparent. It is DrC's challenge for non-locality. In case you are in doubt about what I mean, I'm referring to the difference between

a) A non-local system will generate a dataset which violates Bell's inequality, and
b) It is impossible to find a dataset which violates the inequality (local or non-local).

I'm claiming b) and you are claiming a). So I ask that you provide the dataset. Talking about separability etc. just confirms my point, not yours.

No, I'm claiming that a non-local hidden variables theory can reproduce all the predictions of quantum mechanics, but no local hidden variables theory can. That was what Bell proved.

I agree that it is impossible to come up with such a dataset, but the existence or nonexistence of such a dataset has no relevance if you don't assume local hidden variables. You don't need such a dataset to reproduce the predictions of quantum mechanics.
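To make that agreed point concrete, here is a minimal Python sketch (a randomly generated toy table, not data from any experiment). However the ±1 entries are generated, locally or not, a table of pre-assigned outcomes at three angles, combined with the singlet's perfect anticorrelation between the two wings, always yields correlations satisfying Bell's original inequality |C(a,b) - C(a,c)| <= 1 + C(b,c):

```python
import random

random.seed(0)
# one row per photon pair: Alice's pre-assigned outcome at each of the angles a, b, c;
# replace this generation rule with anything you like, the conclusion is unchanged
rows = [tuple(random.choice([-1, 1]) for _ in range(3)) for _ in range(100_000)]

def C(i, j):
    """Pair correlation between Alice at angle i and Bob at angle j, taking Bob's
    outcome to be the opposite of the table entry (perfect anticorrelation)."""
    return -sum(r[i] * r[j] for r in rows) / len(rows)

a, b, c = 0, 1, 2
lhs = abs(C(a, b) - C(a, c))
rhs = 1 + C(b, c)
print(lhs, "<=", rhs, ":", lhs <= rhs + 1e-12)   # True for every such table
```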
 
  • #498
stevendaryl said:
No, I'm claiming that a non-local hidden variables theory can reproduce all the predictions of quantum mechanics, but no local hidden variables theory can. That was what Bell proved.

I agree that it is impossible to come up with such a dataset, but the existence or nonexistence of such a dataset has no relevance if you don't assume local hidden variables. You don't need such a dataset to reproduce the predictions of quantum mechanics.

If such a dataset is impossible then what dataset is being used to compare experiments to the inequalities, or are you now claiming that the experiments do not produce datasets?
 
  • #499
stevendaryl said:
No, I'm claiming that a non-local hidden variables theory can reproduce all the predictions of quantum mechanics, but no local hidden variables theory can. That was what Bell proved.

I agree that it is impossible to come up with such a dataset, but the existence or nonexistence of such a dataset has no relevance if you don't assume local hidden variables. You don't need such a dataset to reproduce the predictions of quantum mechanics.

Let me make this more explicit: In the spin-1/2 case, the prediction of quantum mechanics (assuming perfect detection, which is problematic, I guess) is:

P(\alpha, \beta) = \frac{1}{2} sin^2{\frac{1}{2}(\beta - \alpha)}

for the probability that Alice will measure spin-up at angle \alpha and Bob will measure spin-up at angle \beta. What Bell proved was that there is no way to simulate such a probability distribution using local hidden variables. There certainly is a way to simulate it using nonlocal interactions.
 
  • #500
stevendaryl said:
But if we allow for faster-than-light causal effects, then there is no reason to assume that such 6-tuples exist. Instead, we would have an 18-tuple:

P_{n,a,a}, P_{n,b,a}, P_{n,c,a},
P_{n,a,b}, P_{n,b,b}, P_{n,c,b},
P_{n,a,c}, P_{n,b,c}, P_{n,c,c},
P'_{n,a,a}, P'_{n,b,a}, P'_{n,c,a},
P'_{n,a,b}, P'_{n,b,b}, P'_{n,c,b},
P'_{n,a,c}, P'_{n,b,c}, P'_{n,c,c}
And which of them will you be substituting into Bell's inequalities to demonstrate violation?
 
  • #501
billschnieder said:
If such a dataset is impossible then what dataset is being used to compare experiments to the inequalities, or are you now claiming that the experiments do not produce datasets?

I have no idea what you are talking about. What it boils down to is that there is a joint probability distribution for Alice and Bob: P(\alpha, \beta) = \frac{1}{2} sin^2(\frac{1}{2} (\beta - \alpha)). This joint probability distribution gives rise to a particular correlation between Bob's measurements and Alice's measurements. This correlation can be tested experimentally, and the prediction is borne out. So experiment confirms the predictions of quantum mechanics.

What Bell showed is that you can't simulate the joint probability distribution P(\alpha, \beta) by a "factored" distribution of the form

\int d\lambda P_A(\alpha, \lambda) P_B(\beta, \lambda) P_L(\lambda)

Bell's inequality gives a bound on the greatest correlation that can be simulated by "factored" probabilities of this form.

The dataset you are asking for is NOT what is measured in experiments. We already know ahead of time that there is no such dataset, so there's no point in testing that. What is measured in experiments is the correlations between Alice's and Bob's measurements.
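To illustrate that bound, here is a toy factored model (a deterministic local-hidden-variable construction of my own, chosen only for simplicity): each wing's outcome depends only on its own setting and the shared \lambda, and the resulting correlations can saturate, but never exceed, Bell's bound, whereas the quantum correlation -cos(\beta - \alpha) does exceed it at the same angles.

```python
import math

def outcome_A(alpha, lam):
    # Alice's outcome depends only on her own setting and the shared lambda
    return 1 if math.cos(lam - alpha) >= 0 else -1

def outcome_B(beta, lam):
    # Bob's outcome likewise; the opposite sign gives perfect anticorrelation at equal settings
    return -outcome_A(beta, lam)

def C(alpha, beta, n=100_000):
    # "factored" correlation: average the product over the shared hidden variable
    return sum(outcome_A(alpha, 2 * math.pi * k / n) * outcome_B(beta, 2 * math.pi * k / n)
               for k in range(n)) / n

a, b, c = 0.0, math.radians(60), math.radians(120)
print(abs(C(a, b) - C(a, c)), 1 + C(b, c))   # ~0.667 vs ~0.667: bound saturated, never exceeded
# the QM correlation -cos(beta - alpha) gives 1.0 vs 0.5 at the same angles: a violation
```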
 
  • #502
stevendaryl said:
I have no idea what you are talking about.
I didn't think so. See posts #424 and #492 in this thread.
What it boils down to is that there is a joint probability distribution for Alice and Bob: P(\alpha, \beta) = \frac{1}{2} sin^2(\frac{1}{2} (\beta - \alpha)). This joint probability distribution gives rise to a particular correlation between Bob's measurements and Alice's measurements. This correlation can be tested experimentally, and the prediction is borne out. So experiment confirms the predictions of quantum mechanics.

What Bell showed is that you can't simulate the joint probability distribution P(\alpha, \beta) by a "factored" distribution of the form

\int d\lambda P_A(\alpha, \lambda) P_B(\beta, \lambda) P_L(\lambda)

Bell's inequality gives a bound on the greatest correlation that can be simulated by "factored" probabilities of this form.
I didn't want to get into this, but you've made the error repeatedly. You do realize that the expression
\int d\lambda P_A(\alpha, \lambda) P_B(\beta, \lambda) P_L(\lambda)

is not a conditional probability statement but a statement of the expectation value of the paired product of outcomes at A and B, don't you? There, P_A(\alpha, \lambda) and P_B(\beta, \lambda) are functions generating +1 or -1, not probabilities. You should check Bell's original paper.

The expectation value of the paired product at the two stations is necessarily factorable, whether the processes generating the outcomes are local or non-local.
 
  • #503
billschnieder said:
And which of them will you be substituting into Bell's inequalities to demonstrate violation?

Bell's inequalities are not about probabilities; they are about correlations. The correlation C(\alpha, \beta) is equal to:

P_{both}(\alpha, \beta) + P_{neither}(\alpha, \beta) - P_{Alice}(\alpha, \beta) - P_{Bob}(\alpha, \beta)

where P_{both}(\alpha, \beta) is the probability that both Alice and Bob measure spin-up, P_{neither}(\alpha, \beta) is the probability that neither measures spin-up, P_{Alice}(\alpha, \beta) is the probability that just Alice measures spin-up, and P_{Bob}(\alpha, \beta) is the probability that just Bob measures spin-up. Assuming perfect detection, the predictions of QM are:

P_{both}(\alpha, \beta) = \frac{1}{2} sin^2(\frac{1}{2}(\beta - \alpha))

P_{neither}(\alpha, \beta) = \frac{1}{2} sin^2(\frac{1}{2}(\beta - \alpha))

P_{Alice}(\alpha, \beta) = \frac{1}{2} cos^2(\frac{1}{2}(\beta - \alpha))

P_{Bob}(\alpha, \beta) = \frac{1}{2} cos^2(\frac{1}{2}(\beta - \alpha))

So the prediction of QM is:
C(\alpha, \beta) = sin^2(\frac{1}{2}(\beta - \alpha)) - cos^2(\frac{1}{2}(\beta - \alpha)) = 1 - 2 cos^2(\frac{1}{2}(\beta - \alpha)) = -cos(\beta - \alpha)

What is measured in tests of Bell's inequality is C(\alpha, \beta).
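As a quick numeric check (the angle choices below are arbitrary, picked only to show the effect): the quantum correlation just derived, C(\alpha, \beta) = -cos(\beta - \alpha), violates Bell's original inequality |C(a,b) - C(a,c)| <= 1 + C(b,c) already at a = 0°, b = 60°, c = 120°.

```python
import math

def C(alpha, beta):
    # QM correlation for the singlet derived above: sin^2 - cos^2 = -cos(beta - alpha)
    return -math.cos(beta - alpha)

a, b, c = 0.0, math.radians(60), math.radians(120)
lhs = abs(C(a, b) - C(a, c))   # = 1.0
rhs = 1 + C(b, c)              # = 0.5
print(lhs, "<=", rhs, ":", lhs <= rhs)   # 1.0 <= 0.5 : False, i.e. the inequality is violated
```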
 
  • #504
stevendaryl said:
The dataset you are asking for is NOT what is measured in experiments. We already know ahead of time that there is no such dataset, so there's no point in testing that. What is measured in experiments is the correlations between Alice's and Bob's measurements.

Huh? You do not know what you are talking about. Correlations are CALCULATED from the measured dataset, not directly measured. What is measured are clicks at given detectors for individual photons.
 
  • #505
billschnieder said:
I didn't think so. See posts #424 and #492 in this thread.

I didn't want to get into this, but you've made the error repeatedly. You do realize that the expression
\int d\lambda P_A(\alpha, \lambda) P_B(\beta, \lambda) P_L(\lambda)

is not a conditional probability statement but a statement of the expectation value of the paired product of outcomes at A and B, don't you? There, P_A(\alpha, \lambda) and P_B(\beta, \lambda) are functions generating +1 or -1, not probabilities. You should check Bell's original paper.

I told you that I was NOT following Bell, but instead making a different, but related, probability claim. The perfect correlations for anti-aligned detectors show that the probabilities P_A(\alpha, \lambda) and P_B(\beta, \lambda) must be 0 or 1. That means that spin-up versus spin-down is a deterministic function of lambda, which is what Bell assumed. But you don't have to assume it; it's forced by the perfect anti-correlations.
 
  • #506
Correlations are CALCULATED from the measured dataset, not directly measured. What is measured are clicks at given detectors for individual photons.

Whatever. The distinction between measuring and calculating from measurements is not the critical point. The correlation function is "measured" by computing:

\frac{1}{N} \sum S_{A,n} S_{B,n}

where S_{A,n} = +1 if Alice measures spin-up on run n and S_{A,n} = -1 if Alice measures spin-down on run n, and similarly for Bob, and N is the total number of runs.

But this data set is not the data set that one can easily prove does not exist (the factored probabilities).
 
  • #507
stevendaryl said:
What is measured in tests of Bell's inequality is C(\alpha, \beta).
Please, you really should read an experimental description of an EPR-type experiment. Using DA's diagram, what is measured is a series of time-tagged pluses and minuses, depending on which detector (D+ or D-) fires at each station. So for a given angle setting "a", Alice has a long list of +1s and -1s which appear random, each with a time tag indicating when the detector fired, and Bob has a similar list for setting "b". Then, at the end of the experiment, the time tags are compared to figure out which ones were "coincident"; the outcomes belonging to each "coincident" pair are multiplied together to obtain the product, and the average of those products is the expectation value of the paired product at the two stations, also known as the correlation.

So in Bell test experiments, what you call C(\alpha, \beta) is actually <ab>, where a represents the outcomes at angle α and b represents the outcomes at angle β.
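A simplified sketch of that procedure (the time tags, outcomes and coincidence window below are invented toy values, not anything from Weihs' actual files):

```python
def correlation(alice, bob, window=1e-6):
    """alice, bob: lists of (time_tag, outcome) with outcome = +1 or -1, sorted by time_tag.
    Pair up events whose time tags agree to within the coincidence window and
    return the average of the products of the paired outcomes, i.e. <ab>."""
    products, j = [], 0
    for t_a, out_a in alice:
        while j < len(bob) and bob[j][0] < t_a - window:   # skip Bob events that are too early
            j += 1
        if j < len(bob) and abs(bob[j][0] - t_a) <= window:
            products.append(out_a * bob[j][1])
            j += 1                                          # each Bob event pairs at most once
    return sum(products) / len(products) if products else 0.0

# toy usage: two coincidences (products -1 and +1) and one unmatched event on each side
alice = [(1.0e-3, +1), (2.0e-3, -1), (3.5e-3, +1)]
bob   = [(1.0e-3, -1), (2.0e-3, -1), (4.0e-3, +1)]
print(correlation(alice, bob))   # (-1 + 1) / 2 = 0.0
```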
 
  • #508
stevendaryl said:
Whatever. The distinction between measuring and calculating from measurements is not the critical point. The correlation function is "measured" by computing:

\frac{1}{N} \sum S_{A,n} S_{B,n}

where S_{A,n} = +1 if Alice measures spin-up on run n and S_{A,n} = -1 if Alice measures spin-down on run n, and similarly for Bob, and N is the total number of runs.

But this data set is not the data set that one can easily prove does not exist (the factored probabilities).

You are still arguing that experiments produce something which is impossible. You cannot argue that something is impossible and then also claim that non-local experiments produce it, whatever it is. Besides, the expectation values calculated from the experiment are clearly factorable, yet the experiments violate the inequality. Go figure.
 
  • #509
billschnieder said:
Please, you really should read an experimental description of an EPR-type experiment. Using DA's diagram, what is measured is a series of time-tagged pluses and minuses, depending on which detector (D+ or D-) fires at each station. So for a given angle setting "a", Alice has a long list of +1s and -1s which appear random, each with a time tag indicating when the detector fired, and Bob has a similar list for setting "b". Then, at the end of the experiment, the time tags are compared to figure out which ones were "coincident"; the outcomes belonging to each "coincident" pair are multiplied together to obtain the product, and the average of those products is the expectation value of the paired product at the two stations, also known as the correlation.

So in Bell test experiments, what you call C(\alpha, \beta) is actually <ab>, where a represents the outcomes at angle α and b represents the outcomes at angle β.

How is that different from what I said?
 
  • #510
billschnieder said:
You are still arguing that experiments produce something which is impossible. You cannot argue that something is impossible and then also claim that non-local experiments produce it, whatever it is.

I'm not arguing that the correlations predicted by quantum mechanics are impossible; I'm arguing that it is impossible to achieve those correlations using a local hidden variables theory. You seem deeply confused about this point. As I said, the quantum joint probabilities P(\alpha, \beta) are certainly possible, and the correlations are calculated from joint probabilities. But you cannot express the joint probability as a factored probability, which is what you would expect from a local hidden variables theory. Bell's inequalities are not impossible to violate; they are impossible to violate using factored probabilities.
 
