Assumptions of the Bell theorem

In summary: The aim of this thread is to make a list of all the additional assumptions that are necessary to prove the Bell theorem. A further aim is to list the assumptions that are used in some but not all versions of the theorem, and so are not really necessary. The list of necessary and unnecessary assumptions is preliminary, so I invite others to supplement and correct it.
  • #36
Bell essentially took EPR's informal idea and put it into mathematical language, so it could be analyzed quantitatively and compared to experiments. That also made it possible to come up with several loopholes (most of which have been excluded by experiments so far) that EPR (and even Bell) missed, e.g. superdeterminism. EPR had also missed the possibility of contextual observables, although it doesn't constitute a loophole in and of itself.
 
  • #37
What I find imprecise here is what is meant by prediction. Usually you take as initial data all the values of all fields on a space-like hypersurface and determine their values for the following surfaces in the foliation. But in EPR, you are given the value at one point on a space-like hypersurface and determine a value at a different point on the same surface. Why is that called prediction?!

I’m not sure what else to call it. Alice is sitting in her lab, making her spin measurements, and far away, Bob is making his measurements. Eventually, they will get back together and compare results. Alice is trying to predict what Bob’s results will be, in the sense that she can write down a statement: “I think Bob is going to measure spin-down in the z-direction”. Then when they get together, they can see if Alice was right, or not.

I don’t see that there is any ambiguity about what prediction means. I guess, since Bob’s measurement is (or can be) at a spacelike separation, then it’s not actually a “pre” diction. So you can call it something else. It’s “pre” in the sense that the statement is made before they get together to compare results.
 
  • #38
This is a terminology thing. If an astrophysicist studies a distant star, and decides that it is about to undergo a supernova explosion, people might call that a “prediction”, even though the explosion may have happened a million years ago (in the coordinate system used by the astrophysicist).
 
  • #39
martinbn said:
What I find imprecise here is what is meant by prediction. Usually you take as initial data all the values of all fields on a space-like hypersurface and determine their values for the following surfaces in the foliation. But in EPR, you are given the value at one point on a space-like hypersurface and determine a value at a different point on the same surface. Why is that called prediction?!
Because you can verify that value later. The speed of light is 300,000 km/s, so if that point is 300,000 km away from you, then you can predict now what you will observe one second later.
 
  • #40
Nullstein said:
In fact, counterfactual propositions as in the GHZ theorem only make sense in terms of probabilities, since we can't test them in just a single run. After all, contextuality is a phenomenon that can only possibly appear in situations where not all measurements can be performed simultaneously. The presence of contextuality can thus only be detected in the statistics. The conclusion of a GHZ experiment is the same as for a Bell test experiment: If you require all observables to have simultaneously well-defined values, then unless you are willing to accept irregular explanations such as superdeterminism or FTL actions, you are forced to give up Kolmogorov probability.
I always understood that GHZ is an example where a single measurement rules out local realism.

I would also like to point out that Demystifier has "no backwards in time causality (or effects)" as an assumption. That assumption is certainly questionable.
 
  • Like
Likes Demystifier
  • #41
DrChinese said:
I always understood that GHZ is an example where a single measurement rules out local realism.
Well, as I mentioned in a previous post, the GHZ set-up involves 3 particles and three measurements (one for each particle), and each measurement has two possible settings, for the x-component of spin or the y-component.

With the particular GHZ state for the particles, you can establish using QM that:

1. If all three measurements are made for the x-component of spin, then there will be an odd number of spin-up results.

2. If two measurements are for the y-component of spin, and the third is for the x-component, then there will be an even number of spin-up results.

Establishing 1&2 empirically cannot be done with a single measurement.
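For anyone who wants to check predictions 1 and 2 directly, here is a minimal numpy sketch. It assumes the GHZ state (|000⟩ + |111⟩)/√2 and standard Pauli matrices; a different phase convention for the state flips the signs of the products but not the structure of the argument. A product of +1 corresponds to an odd number of spin-up results, a product of -1 to an even number.

```python
import numpy as np

# Pauli matrices and a helper for three-particle tensor products
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

# Assumed GHZ state (|000> + |111>)/sqrt(2); other phase conventions flip the signs
e0 = np.array([1, 0], dtype=complex)
e1 = np.array([0, 1], dtype=complex)
ghz = (kron3(e0, e0, e0) + kron3(e1, e1, e1)) / np.sqrt(2)

# Expectation values of the four relevant products of spin components
for label, op in [("xxx", kron3(X, X, X)),
                  ("xyy", kron3(X, Y, Y)),
                  ("yxy", kron3(Y, X, Y)),
                  ("yyx", kron3(Y, Y, X))]:
    val = np.real(np.vdot(ghz, op @ ghz))   # <GHZ| op |GHZ>
    print(label, round(val, 6))
# xxx -> +1 (odd number of spin-up results), xyy/yxy/yyx -> -1 (even number)
```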
 
  • Like
Likes Demystifier
  • #42
DrChinese said:
I always understood that GHZ is an example where a single measurement rules out local realism.
You need more than one measurement, because you need to check more than one set of measurement settings. QM predicts the outcome for the 111 settings with certainty, so if you are confident that you don't make measurement errors, then you only need to check that set of settings once. However, you only get a contradiction with local realism if you also check the other settings 221, 212 and 122, which gives you a minimum of 4 measurements to perform. Moreover, the results of the 221, 212 and 122 experiments are probabilistic (i.e. not with probability 100%), so 4 measurements still aren't enough. (Also, in order to rule out local realism, the measurement settings need to be chosen independently, depending only on localized random number generators, so it can happen that settings such as 112 or 222 get chosen, which increases the number of required measurements even more.)

It's very analogous to a CHSH experiment. You need to perform the experiment with different sets of detector settings. The difference is that in a CHSH experiment, all results are probabilistic, while in a GHZ experiment, one (but only one) of the probabilities is 100%.
 
  • #43
stevendaryl said:
Well, as I mentioned in a previous post, the GHZ set-up involves 3 particles and three measurements (one for each particle), and each measurement has two possible settings, for the x-component of spin or the y-component.

With the particular GHZ state for the particles, you can establish using QM that:

1. If all three measurements are made for the x-component of spin, then there will be an odd number of spin-up results.

2. If two measurements are for the y-component of spin, and the third is for the x-component, then there will be an even number of spin-up results.

Establishing 1&2 empirically cannot be done with a single measurement.
Yes, but for each measurement setup a single measurement is enough.
 
  • #44
Nullstein said:
Moreover, the results of the 221, 212 and 122 experiments are probabilistic (i.e. not with probability 100%), so 4 measurements still aren't enough.
While individual spins in 221 (the same holds for 212 and 122) are uncertain, their product is certain. Each time you measure it, you obtain the same product. And it is this product that plays the decisive role in the GHZ contradiction. In the theoretical derivation of the GHZ contradiction, probability is not used (see e.g. page 12 of my http://thphys.irb.hr/wiki/main/images/a/a1/QFound2.pdf). So I would say that probability is not used in the theorem, even if an experimentalist uses some probability in practice.
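In the same spirit, the impossibility can be exhibited without any probabilities at all. A minimal brute-force sketch (assuming the sign convention in which the xxx product is +1 and the xyy, yxy, yyx products are -1; other conventions flip the signs but lead to the same conclusion): no assignment of definite values ±1 to the six quantities x_i, y_i reproduces all four certain products at once.

```python
import itertools

# Brute-force check: no local hidden-variable assignment of definite values +-1
# to x1, y1, x2, y2, x3, y3 satisfies all four GHZ product constraints at once.
count = 0
for x1, y1, x2, y2, x3, y3 in itertools.product([+1, -1], repeat=6):
    if (x1 * x2 * x3 == +1 and
            x1 * y2 * y3 == -1 and
            y1 * x2 * y3 == -1 and
            y1 * y2 * x3 == -1):
        count += 1

print(count)  # 0: multiplying the last three constraints forces x1*x2*x3 = -1
```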
 
  • #45
Fra said:
Reichenbach common cause principle only applies to events in the same event space, right?

As I see the physical and experimental construction of the event spaces or ensembles, the observational events at Alice, at Bob, and the hypothetical sampling of the hidden variable do not belong to the same event space. I see it as a fallacious deduction in the Bell type of causality that the events at Alice and Bob are independently functions of the hidden cause.

I.e., the premises of "Bell realism" about how causation in nature works are likely the main problem, rather than non-locality, which I think was the OT?
/Fredrik

I'm not sure I understand the point you're making, but I think I agree that the weirdness of quantum mechanics is not its nonlocality. The only relevance of nonlocality is that it shows that that weirdness can't easily be explained in terms of a nonweird "deeper theory".
 
  • Like
Likes martinbn
  • #46
stevendaryl said:
I'm not sure I understand the point you're making, but I think I agree that the weirdness of quantum mechanics is not its nonlocality. The only relevance of nonlocality is that it shows that that weirdness can't easily be explained in terms of a nonweird "deeper theory".
I don't understand this. If nonlocality is not weird, then what exactly is weird about candidate deeper theories?
 
  • Like
Likes vanhees71
  • #47
Demystifier said:
I don't understand this. If nonlocality is not weird, then what exactly is weird about candidate deeper theories?
Which candidate theories do you have in mind?
 
  • #48
Demystifier said:
I don't understand this. If nonlocality is not weird, then what exactly is weird about candidate deeper theories?

I tend to lump all loopholes to Bell's theorem into the "weird" category: FTL, back-in-time, superdeterminism, many-worlds. They are all weird in terms of Einstein's hopes for what an ultimate theory of physics would look like.

My point is that the puzzling aspects of quantum mechanics are actually manifested in local experiments: the fact that measurement always gives an eigenvalue, entanglement, the fact that after a measurement, the system seems to "collapse" into an eigenstate. But if you restrict yourself to local experiments, you might hope for a local explanation for all this. Maybe, for some reason, non-eigenstates are dynamically unstable, and so the system falling into an eigenstate is the same sort of phenomenon as -- I don't know, maybe the fact that a coin is always seen to land on one side or the other, and not the edge.

But when you introduce entanglement between distant systems, the dynamical explanation for always measuring an eigenstate becomes more difficult. You have to drastically modify known physics to make it work. Adding FTL influences counts as a drastic modification, in my opinion.
 
  • #49
Demystifier said:
While individual spins in 221 (the same holds for 212 and 122) are uncertain, their product is certain. Each time you measure it, you obtain the same product. And it is this product that plays the decisive role in the GHZ contradiction. In the theoretical derivation of the GHZ contradiction, probability is not used (see e.g. page 12 of my http://thphys.irb.hr/wiki/main/images/a/a1/QFound2.pdf). So I would say that probability is not used in the theorem, even if an experimentalist uses some probability in practice.
Oh, you are right, the products for the 221, 212 and 122 settings are also certain. However, that still gives you a total of 4 measurements to perform (still more than one).

But you are avoiding the central question: So far, we have just computed the quantum mechanical predictions. But that's only half of the problem. After computing the QM predictions, we want to show that they contradict something. It seems that you disagree that the contradiction we're looking for is the contradiction with the local causality condition (which is probabilistic). So what other notion of causality do you have in mind that the GHZ experiment could contradict?
 
  • #50
Nullstein said:
Oh, you are right, the products for the 221, 212 and 122 settings are also certain. However, that still gives you a total of 4 measurements to perform (still more than one).

I don't understand how we're counting the "number of measurements". The claim that (for a particular setting of the three detectors), the product of the spin results is guaranteed to be a certain value can be falsified by a single experiment. But the claim that the measurement results are due to hidden variables cannot be falsified by a single experiment. So it seems to me incorrect to say that a single measurement, or even 4 measurements, can disprove local hidden variables.

[added]
In the GHZ experiment, you have the predictions of QM. You can prove, without doing any experiments at all, that these predictions are inconsistent with a local hidden variable theory. But that still leaves two possibilities open:
  1. QM is wrong.
  2. Local hidden variables is wrong.
In case 1, a single measurement could show it. In case 2, it seems that it can only be shown statistically. Which is the same situation as the usual EPR paradox. In both cases, disproving QM can be done with a single measurement, but disproving local hidden variables requires many measurements.
 
  • #51
stevendaryl said:
I don't understand how we're counting the "number of measurements". The claim that (for a particular setting of the three detectors), the product of the spin results is guaranteed to be a certain value can be falsified by a single experiment. But the claim that the measurement results are due to hidden variables cannot be falsified by a single experiment. So it seems to me incorrect to say that a single measurement, or even 4 measurements, can disprove local hidden variables.

[added]
In the GHZ experiment, you have the predictions of QM. You can prove, without doing any experiments at all, that these predictions are inconsistent with a local hidden variable theory. But that still leaves two possibilities open:
  1. QM is wrong.
  2. Local hidden variables is wrong.
In case 1, a single measurement could show it. In case 2, it seems that it can only be shown statistically. Which is the same situation as the usual EPR paradox. In both cases, disproving QM can be done with a single measurement, but disproving local hidden variables requires many measurements.
Yes, the "single measurement" issue is really a side note of little importance. I just wanted to make clear that more than one measurement (at least 4) is really required, due to the contextual nature of the experiment.

The central issue is: What are we actually trying to arrive at a contradiction with? And my answer would be: The notion of local causality (which requires probabilities to formulate and thus statistics to falsify). So far, Demystifier has dodged that question, but if he were right, he would have to specify another notion of local causality that doesn't require probabilities for its formulation.
 
  • #52
Nullstein said:
It seems that you disagree that the contradiction we're looking for is the contradiction with the local causality condition (which is probabilistic). So what other notion of causality do you have in mind that the GHZ experiment could contradict?
I disagree that the local causality condition is necessarily probabilistic. In classical relativistic theory, for instance, it is deterministic. GHZ could contradict the deterministic local causality condition.
 
  • #53
Demystifier said:
I disagree that the local causality condition is necessarily probabilistic. In classical relativistic theory, for instance, it is deterministic. GHZ could contradict the deterministic local causality condition.
So what is the local causality condition in your opinion? Can you write down a formula?
 
  • #54
Demystifier said:
I don't understand this. If nonlocality is not weird, then what exactly is weird about candidate deeper theories?
You know the answer: There's nothing weird about quantum theory. It's just our classical prejudice that's weird!
 
  • #55
Nullstein said:
So what is the local causality condition in your opinion? Can you write down a formula?
I explained it qualitatively in my lectures (that I linked in another post) at page 13, but I have to think how to write it down with a formula.
 
  • #56
vanhees71 said:
You know the answer: There's nothing weird about quantum theory. It's just our classical prejudice that's weird!
I know your answer, but I wanted to see his answer.
 
  • Like
Likes vanhees71
  • #57
martinbn said:
Which candidate theories do you have in mind?
Bohmian, many worlds, GRW, ...
 
  • #58
vanhees71 said:
It's just our classical prejudice that's weird!
In the first post I made a list of necessary assumptions. Which of them, in your opinion, is wrong?
 
  • #59
Demystifier said:
I explained it qualitatively in my lectures (that I linked in another post) at page 13, but I have to think how to write it down with a formula.
Well, the problem is that the argument in your lecture is very informal, which is fine for an overview, but because of the informal nature, it misses lots of important details. For example, you haven't considered the possibility of superdeterminism or retrocausality and so on. That's what Bell gave us: He turned the informal EPR argument into a falsifiable formula with clearly defined assumptions.

I think the reason why people have settled for probabilistic causality is much deeper: How can you test causality even in principle if you aren't allowed to repeat experiments and draw conclusions from the statistics? If B follows A just once, it could always be just a coincidence. In order to infer causality, you need something like: The occurrence of A increases the probability of the occurrence of B, while the occurrence of (not A) decreases the probability of the occurrence of B. (In fact, this is not enough. That's where Reichenbach enters the discussion.) This research has culminated in what is known today as the causal Markov condition.
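As a toy illustration of the screening-off idea (with made-up numbers, not tied to any particular experiment): a common cause C makes A and B correlated, yet conditioning on C factorizes the joint probability, P(A, B | C) = P(A | C) P(B | C), which is the Reichenbach-style criterion referred to above.

```python
import random

# Toy common-cause model (made-up numbers): a common cause C ~ Bernoulli(0.5),
# and A, B each depend only on C. Unconditionally A and B are correlated, but
# conditioning on C screens the correlation off: P(A,B|C) = P(A|C) * P(B|C).
random.seed(0)
samples = []
for _ in range(200_000):
    c = random.random() < 0.5
    a = random.random() < (0.9 if c else 0.1)
    b = random.random() < (0.8 if c else 0.2)
    samples.append((a, b, c))

def prob(pred, data):
    return sum(1 for s in data if pred(s)) / len(data)

print("P(A,B)   =", round(prob(lambda s: s[0] and s[1], samples), 3))  # ~0.37
print("P(A)P(B) =", round(prob(lambda s: s[0], samples)
                          * prob(lambda s: s[1], samples), 3))         # ~0.25
for c_val in (True, False):
    cond = [s for s in samples if s[2] == c_val]
    lhs = prob(lambda s: s[0] and s[1], cond)
    rhs = prob(lambda s: s[0], cond) * prob(lambda s: s[1], cond)
    print(f"C={c_val}: P(A,B|C)={lhs:.3f}  P(A|C)P(B|C)={rhs:.3f}")    # nearly equal
```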
 
  • #60
Demystifier said:
In the first post I made a list of necessary assumptions. Which of them, in your opinion, is wrong?
The problem with that list is that it excludes the classicality assumption. I think the answer by Copenhagenists would just be that classicality, in the form of Kolmogorov probability, would have to be rejected, i.e. the world is non-classical and therefore classical probability doesn't apply anymore. You can find research about this in the works of Redei and Hofer-Szabo. They generalize the common cause principle to the case of quantum probability. It reduces to the classical notion in the case of commuting observables.
 
  • #61
Demystifier said:
In the first post I made a list of necessary assumptions. Which of them, in your opinion, is wrong?
I don't know about these philosophical assumptions, but classical physics is obviously wrong outside its validity range. It's an approximation of quantum physics with a limited validity range, while for QT the validity range is so far unknown (except that it cannot satisfactorily describe gravitation). That's why QT was discovered in 1925 (after 25 years of struggling with "old quantum theory").
 
  • #62
Nullstein said:
How can you test causality even in principle, if you aren't allowed to repeat experiments and draw conclusions from the statistics? If B follows A just once, it could always be just a coincidence.
Fine, but then probability is needed even to test classical deterministic theories.
 
  • #63
Nullstein said:
The problem with that list is that it excludes the classicality assumption.
The notion of "classicality" is too ambiguous to be mentioned on the list. In fact, any assumption on the list could be interpreted as a kind of "classicality".
Nullstein said:
I think the answer by Copenhagenists would just be that classicality, in the form of Kolmogorov probability, would have to be rejected,
That assumption is on the list, at least on the unnecessary one. But I see now that it is controversial whether it should be on the necessary or unnecessary list in the GHZ case, which is what we are discussing here.
 
  • #64
vanhees71 said:
I don't know about these philosophical assumptions
So you don't know what the problem is, but you know what the solution is. Lucky you! :-p
 
  • #65
Yes, the solution is to accept that Nature behaves as she behaves and that with the natural sciences we are able to find this out with astonishing detail and accuracy. The discovery of QT simply shows that a naive macroscopic worldview doesn't apply in the microscopic realm. That's not so surprising, is it? You don't need to make physics more complicated than it is by inventing some philosophical problems which cannot be unambiguously answered by the scientific method.

Bell's great achievement was to make such an ambiguous philosophical pseudo-problem decidable by the scientific method, i.e., by experiment, and all such Bell tests are in overwhelming favor of the predictions of quantum theory. So this problem is indeed solved, and one can move on to apply the solution to 21st-century technology.
 
  • Like
Likes dextercioby
  • #66
gentzen said:
I cannot believe that the meaning of "ontological theories" can be reduced to a mere instrumentalist "capacity to have effect", an oversimplistic functionalist criterion.

Agree fully.
 
  • #67
vanhees71 said:
You know the answer: There's nothing weird about quantum theory. It's just our classical prejudice that's weird!
I think you are wrong about that. Unless you consider it prejudice to want a consistent theory.

Now, I don’t think that QM is inconsistent. But I think QM + no FTL influences + no backwards in time influences + no superdeterminism + no classical/quantum cut together are inconsistent.
 
  • Like
Likes Demystifier
  • #68
No, QM is consistent without backwards-in-time influences (by construction there are none of this kind in the theory) and without a classical/quantum cut (there is none known; quantum effects can be observed for ever larger systems). I don't know what superdeterminism is. For sure QM is causal but indeterministic.
 
  • #69
Demystifier said:
Fine, but then probability is needed even to test classical deterministic theories.
Yes, I would argue so. At least, if you're interested in uncovering causal relationships. Even in a deterministic theory, answering the question of whether a specific circumstance is the cause of an effect in the future requires analyzing multiple alternatives and evaluating their chance to influence the future.

For example, planetary motion is (in an idealized setting) completely deterministic. Now you could ask the question: Does the presence of the moon cause Earth to orbit the sun, or would Earth also orbit the sun if the moon weren't there? You then have to analyze the deterministic equations for different initial conditions, and you might then come to the conclusion that the presence of the moon doesn't significantly alter the probability of Earth orbiting the sun. (There is even a celebrated result in this direction, known as the KAM theorem.)

Demystifier said:
The notion of "classicality" is too ambiguous to be mentioned on the list. In fact, any assumption on the list could be interpreted as a kind of "classicality".
That assumption is on the list, at least on the unnecessary one. But I see now that it is controversial whether it should be on the necessary or unnecessary list in the GHZ case, which is what we are discussing here.
I was thinking about classicality in the sense of Kolmogorovian probability in contrast to e.g. quantum probability. That means that a classical state is just characterized by an element of a set of states and probabilities just arise as probability distributions on this set.
 
  • Like
Likes dextercioby
  • #70
Nullstein said:
Kolmogorovian probability in contrast to e.g. quantum probability.
What do you mean by quantum probability? The Born rule satisfies the Kolmogorov axioms of probability.
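For a single, fixed measurement context that is easy to check numerically. A minimal sketch with an arbitrary example qubit state: the Born-rule outcome probabilities are non-negative and sum to one, i.e. they form an ordinary probability distribution.

```python
import numpy as np

# For one fixed observable (one measurement context), the Born-rule probabilities
# are non-negative and sum to 1, i.e. they define a Kolmogorov probability
# distribution over the outcomes.
X = np.array([[0, 1], [1, 0]], dtype=complex)   # spin-x observable (Pauli X)
psi = np.array([0.6, 0.8j], dtype=complex)      # arbitrary normalized qubit state

eigvals, eigvecs = np.linalg.eigh(X)
probs = np.abs(eigvecs.conj().T @ psi) ** 2     # Born rule: |<eigenvector|psi>|^2
for lam, p in zip(eigvals, probs):
    print(f"outcome {lam:+.0f}: probability {p:.3f}")
print("sum =", probs.sum())                     # 1.0
```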
 
  • Like
Likes vanhees71
