A Assumptions of the Bell theorem

  • #31
Morbert said:
This assumption can probably be weakened a la Asher Peres: Macroscopic intersubjectivity. A measurement outcome might not be objective when we consider "possible" super observers larger than the universe, but all human observers will agree on the measurement outcome.
What's the point of introducing the superobserver? Otherwise I would answer: to discuss the many-worlds interpretation, but that doesn't look like something Peres would propose.
 
Last edited:
  • #32
Demystifier said:
Yes it is. Is that a problem?
The Reichenbach common cause principle only applies to events in the same event space, right?

As I see the physical and experimental construction of the event spaces or ensembles, the observational events at Alice and Bob and the hypothetical sampling of the hidden variable do not belong to the same event space. I see it as a fallacious deduction in Bell-type causality, where the events at Alice and Bob are independently functions of the hidden cause.

I.e., the premises of "Bell realism" about how causation in nature works are likely the main problem, rather than non-locality, which I think was the original topic?

/Fredrik
 
  • #33
Demystifier said:
What's the point of introducing the superobserver?
If there are superobservers, facts for ordinary observers are not necessarily facts for superobservers. Ordinary facts are then not objective but only objective FAPP for ordinary observers, i.e. intersubjective. One could weaken the assumption like this, but since all physics experiments will be performed by Earth-scale observers for the foreseeable future, I think this is a minor point.

On conceptual grounds, this makes room for a super-macroscopic Copenhagen-like interpretation which has objective superfacts but not objective ordinary facts. But since one is immediately led to ask about the existence of super-superobservers who might question the superfacts, I don't think it is interesting to squeeze this kind of interpretation in between classic Copenhagen and Many Worlds.
 
Last edited:
  • Haha
Likes eloheim
  • #34
To me, the assumptions behind Bell's theorem are essentially what Einstein argued in the original EPR paper: If you can make a definite prediction about the outcome of a distant measurement, then it means either that the outcome was determined before the prediction was made, or (and this possibility implies FTL influences) the act of prediction itself affected the outcome.
 
  • Like
Likes DrChinese
  • #35
stevendaryl said:
To me, the assumptions behind Bell's theorem are essentially what Einstein argued in the original EPR paper: If you can make a definite prediction about the outcome of a distant measurement, then it means either that the outcome was determined before the prediction was made, or (and this possibility implies FTL influences) the act of prediction itself affected the outcome.
What I find imprecise here is what is meant by prediction. Usually you take as initial data all the values of all fields on a space-like hypersurface and determine their values for the following surfaces in the foliation. But in EPR, you are given the value at one point on a space-like hypersurface and determine a value at a different point on the same surface. Why is that called prediction?!
 
  • #36
Bell essentially took EPR's informal idea and put it into mathematical language, so that it could be analyzed quantitatively and compared with experiments. That also made it possible to identify several loopholes (most of which have been closed by experiments so far) that EPR (and even Bell) missed, e.g. superdeterminism. EPR had also missed the possibility of contextual observables, although that does not constitute a loophole in and of itself.
 
  • #37
martinbn said:
What I find imprecise here is what is meant by prediction. Usually you take as initial data all the values of all fields on a space-like hypersurface and determine their values for the following surfaces in the foliation. But in EPR, you are given the value at one point on a space-like hypersurface and determine a value at a different point on the same surface. Why is that called prediction?!


I’m not sure what else to call it. Alice is sitting in her lab, making her spin measurements, and far away, Bob is making his measurements. Eventually, they will get back together and compare results. Alice is trying to predict what Bob’s results will be, in the sense that she can write down a statement: “I think Bob is going to measure spin-down in the z-direction”. Then when they get together, they can see if Alice was right, or not.

I don’t see that there is any ambiguity about what prediction means. I guess, since Bob’s measurement is (or can be) at a spacelike separation, then it’s not actually a “pre” diction. So you can call it something else. It’s “pre” in the sense that the statement is made before they get together to compare results.
 
  • #38
This is a terminology thing. If an astrophysicist studies a distant star, and decides that it is about to undergo a supernova explosion, people might call that a “prediction”, even though the explosion may have happened a million years ago (in the coordinate system used by the astrophysicist).
 
  • #39
martinbn said:
What I find imprecise here is what is meant by prediction. Usually you take as initial data all the values of all fields on a space-like hypersurface and determine their values for the following surfaces in the foliation. But in EPR, you are given the value at one point on a space-like hypersurface and determine a value at a different point on the same surface. Why is that called prediction?!
Because you can verify that value later. The speed of light is 300,000 km/s, so if that point is 300,000 km away from you, then you can predict now what your observation will be 1 second later.
 
Last edited:
  • #40
Nullstein said:
In fact, counterfactual propositions as in the GHZ theorem only make sense in terms of probabilities, since we can't test them in just a single run. After all, contextuality is a phenomenon that can only possibly appear in situations where not all measurements can be performed simultaneously. The presence of contextuality can thus only be detected in the statistics. The conclusion of a GHZ experiment is the same as for a Bell test experiment: If you require all observables to have simultaneously well-defined values, then unless you are willing to accept irregular explanations such as superdeterminism or FTL actions, you are forced to give up Kolmogorov probability.
I always understood that GHZ is an example where a single measurement rules out local realism.

I would also like to point out that Demystifier has "no backwards in time causality (or effects)" as an assumption. That assumption is certainly questionable.
 
  • Like
Likes Demystifier
  • #41
DrChinese said:
I always understood that GHZ is an example where a single measurement rules out local realism.
Well, as I mentioned in a previous post, the GHZ set-up involves 3 particles and three measurements (one for each particle), and each measurement has two possible settings, for the x-component of spin or the y-component.

With the particular GHZ state for the particles, you can establish using QM that:

1. If all three measurements are made for the x-component of spin, then there will be an odd number of spin-up results.

2. If two measurements are for the y-component of spin, and the third is for the x-component, then there will be an even number of spin-up results.

Establishing 1&2 empirically cannot be done with a single measurement.
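As a quick sanity check of 1 & 2, here is a minimal numerical sketch (my own illustration, not from the thread; it assumes the GHZ state (|↑↑↑⟩ + |↓↓↓⟩)/√2 written in the z basis and the standard Pauli matrices, and all variable names are arbitrary):

```python
import numpy as np

# Pauli matrices in the z basis (|0> = spin-up, |1> = spin-down)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def kron3(a, b, c):
    """Tensor product of three single-spin operators."""
    return np.kron(np.kron(a, b), c)

# GHZ state (|000> + |111>)/sqrt(2)
psi = np.zeros(8, dtype=complex)
psi[0] = psi[7] = 1 / np.sqrt(2)

# Expectation values of the four products of spin components
for label, op in [("xxx", kron3(X, X, X)),
                  ("xyy", kron3(X, Y, Y)),
                  ("yxy", kron3(Y, X, Y)),
                  ("yyx", kron3(Y, Y, X))]:
    print(label, round(float(np.real(psi.conj() @ op @ psi)), 6))
```

This prints +1 for xxx and -1 for the three mixed settings: a product of +1 over three ±1 outcomes means an odd number of spin-up results, and -1 means an even number, matching 1 and 2 above (other GHZ sign conventions simply swap which settings give the odd and even counts).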
 
  • Like
Likes Demystifier
  • #42
DrChinese said:
I always understood that GHZ is an example where a single measurement rules out local realism.
You need more than one measurement, because you need to check more than one set of measurement settings. QM predicts the outcome for the 111 settings with certainty, so if you are confident that you don't make measurement errors, then you only need to check that set of settings once. However, you only get a contradiction with local realism if you also check the other settings 221, 212 and 122, which gives a minimum of 4 measurements to perform. Moreover, the results of the 221, 212 and 122 experiments are probabilistic (i.e. not with probability 100%), so 4 measurements still aren't enough. (Also, in order to rule out local realism, the measurement settings need to be chosen independently, depending only on localized random number generators, so it can happen that settings such as 112 or 222 get chosen, which increases the number of required measurements even more.)

It's very analogous to a CHSH experiment. You need to perform the experiment with different sets of detector settings. The difference is that in a CHSH experiment, all results are probabilistic, while in a GHZ experiment, one (but only one) of the probabilities is 100%.
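For reference, the CHSH quantity alluded to here is usually written (in the standard textbook form, not anything specific to this thread) as

$$S = E(a,b) + E(a,b') + E(a',b) - E(a',b'), \qquad |S| \le 2 \ \text{(local hidden variables)}, \quad |S| \le 2\sqrt{2} \ \text{(QM)},$$

and estimating the four correlators E requires runs with all four setting pairs, which is the analogy being drawn above.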
 
Last edited:
  • #43
stevendaryl said:
Well, as I mentioned in a previous post, the GHZ set-up involves 3 particles and three measurements (one for each particle), and each measurement has two possible settings, for the x-component of spin or the y-component.

With the particular GHZ state for the particles, you can establish using QM that:

1. If all three measurements are made for the x-component of spin, then there will be an odd number of spin-up results.

2. If two measurements are for the y-component of spin, and the third is for the x-component, then there will be an even number of spin-up results.

Establishing 1&2 empirically cannot be done with a single measurement.
Yes, but for each measurement setup a single measurement is enough.
 
  • #44
Nullstein said:
Moreover, the results of the 221, 212 and 122 experiments are probabilistic (i.e. not with probability 100%), so 4 measurements still aren't enough.
While individual spins in 221 (the same is valid for 212 and 122) are uncertain, their product is certain. Each time you measure it, you obtain the same product. And it is this product that plays the decisive role in the GHZ contradiction. In the theoretical derivation of the GHZ contradiction the probability is not used (see e.g. page 12 of my http://thphys.irb.hr/wiki/main/images/a/a1/QFound2.pdf). So I would say that probability is not used in the theorem, even if an experimentalist uses some probability in practice.
 
  • #45
Fra said:
The Reichenbach common cause principle only applies to events in the same event space, right?

As I see the physical and experimental construction of the event spaces or ensembles, the observational events at Alice and Bob and the hypothetical sampling of the hidden variable do not belong to the same event space. I see it as a fallacious deduction in Bell-type causality, where the events at Alice and Bob are independently functions of the hidden cause.

I.e., the premises of "Bell realism" about how causation in nature works are likely the main problem, rather than non-locality, which I think was the original topic?
/Fredrik

I'm not sure I understand the point you're making, but I think I agree that the weirdness of quantum mechanics is not its nonlocality. The only relevance of nonlocality is that it shows that that weirdness can't easily be explained in terms of a nonweird "deeper theory".
 
  • Like
Likes martinbn
  • #46
stevendaryl said:
I'm not sure I understand the point you're making, but I think I agree that the weirdness of quantum mechanics is not its nonlocality. The only relevance of nonlocality is that it shows that that weirdness can't easily be explained in terms of a nonweird "deeper theory".
I don't understand this. If nonlocality is not weird, then what exactly is weird about candidate deeper theories?
 
  • Like
Likes vanhees71
  • #47
Demystifier said:
I don't understand this. If nonlocality is not weird, then what exactly is weird about candidate deeper theories?
Which candidate theories do you have in mind?
 
  • #48
Demystifier said:
I don't understand this. If nonlocality is not weird, then what exactly is weird about candidate deeper theories?

I tend to lump all loopholes to Bell's theorem into the "weird" category: FTL, back-in-time, superdeterminism, many-worlds. They are all weird in terms of Einstein's hopes for what an ultimate theory of physics would look like.

My point is that the puzzling aspects of quantum mechanics are actually manifested in local experiments: the fact that measurement always gives an eigenvalue, entanglement, the fact that after a measurement, the system seems to "collapse" into an eigenstate. But if you restrict yourself to local experiments, you might hope for a local explanation for all this. Maybe, for some reason, non-eigenstates are dynamically unstable, and so the system falling into an eigenstate is the same sort of phenomenon as -- I don't know, maybe the fact that a coin is always seen to land on one side or the other, and not the edge.

But when you introduce entanglement between distant systems, the dynamical explanation for always measuring an eigenstate becomes more difficult. You have to drastically modify known physics to make it work. Adding FTL influences counts as a drastic modification, in my opinion.
 
  • #49
Demystifier said:
While individual spins in 221 (the same is valid for 212 and 122) are uncertain, their product is certain. Each time you measure it, you obtain the same product. And it is this product that plays the decisive role in the GHZ contradiction. In the theoretical derivation of the GHZ contradiction the probability is not used (see e.g. page 12 of my http://thphys.irb.hr/wiki/main/images/a/a1/QFound2.pdf). So I would say that probability is not used in the theorem, even if an experimentalist uses some probability in practice.
Oh, you are right, the products for the 221, 212 and 122 settings are also certain. However, that still gives you a total of 4 measurements to perform (still more than one).

But you are avoiding the central question: So far, we have just computed the quantum mechanical predictions. But that's only half of the problem. After computing the QM predictions, we want to show that they contradict something. It seems that you disagree that the contradiction we're looking for is the contradiction with the local causality condition (which is probabilistic). So what other notion of causality do you have in mind that the GHZ experiment could contradict?
 
  • #50
Nullstein said:
Oh, you are right, the products for the 221, 212 and 122 settings are also certain. However, that still gives you a total of 4 measurements to perform (still more than one).

I don't understand how we're counting the "number of measurements". The claim that (for a particular setting of the three detectors), the product of the spin results is guaranteed to be a certain value can be falsified by a single experiment. But the claim that the measurement results are due to hidden variables cannot be falsified by a single experiment. So it seems to me incorrect to say that a single measurement, or even 4 measurements, can disprove local hidden variables.

[added]
In the GHZ experiment, you have the predictions of QM. You can prove, without doing any experiments at all, that these predictions are inconsistent with a local hidden variable theory. But that still leaves two possibilities open:
  1. QM is wrong.
  2. Local hidden variables is wrong.
In case 1, a single measurement could show it. In case 2, it seems that it can only be shown statistically. Which is the same situation as the usual EPR paradox. In both cases, disproving QM can be done with a single measurement, but disproving local hidden variables requires many measurements.
 
  • #51
stevendaryl said:
I don't understand how we're counting the "number of measurements". The claim that (for a particular setting of the three detectors), the product of the spin results is guaranteed to be a certain value can be falsified by a single experiment. But the claim that the measurement results are due to hidden variables cannot be falsified by a single experiment. So it seems to me incorrect to say that a single measurement, or even 4 measurements, can disprove local hidden variables.

[added]
In the GHZ experiment, you have the predictions of QM. You can prove, without doing any experiments at all, that these predictions are inconsistent with a local hidden variable theory. But that still leaves two possibilities open:
  1. QM is wrong.
  2. Local hidden variables is wrong.
In case 1, a single measurement could show it. In case 2, it seems that it can only be shown statistically. Which is the same situation as the usual EPR paradox. In both cases, disproving QM can be done with a single measurement, but disproving local hidden variables requires many measurements.
Yes, the "single measurement" issue is really a side note of little importance. I just wanted to make clear that it's really more than one (at least 4) measurements that are required due to the contextual nature of the experiment.

The central issue is: What are we actually trying to arrive at a contradiction with? And my answer would be: the notion of local causality (which requires probabilities to formulate and thus statistics to falsify). So far, Demystifier has dodged that question, but if he were right, he would have to specify another notion of local causality that doesn't require probabilities for its formulation.
 
  • #52
Nullstein said:
It seems that you disagree that the contradiction we're looking for is the contradiction with the local causality condition (which is probabilistic). So what other notion of causality do you have in mind that the GHZ experiment could contradict?
I disagree that the local causality condition is necessarily probabilistic. In classical relativistic theory, for instance, it is deterministic. GHZ could contradict a deterministic local causality condition.
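To make the deterministic reading concrete, here is a small sketch (my own illustration, not from the thread), using the same sign convention as above: xxx product +1 and the mixed-setting products -1. It enumerates every deterministic assignment of pre-existing ±1 values to the six local quantities and finds that none satisfies all four perfect-correlation constraints:

```python
from itertools import product

# Pre-assigned local outcomes x_i, y_i in {+1, -1} for particles i = 1, 2, 3.
# GHZ constraints (one sign convention):
#   x1*x2*x3 = +1  and  x1*y2*y3 = y1*x2*y3 = y1*y2*x3 = -1
solutions = [
    (x1, y1, x2, y2, x3, y3)
    for x1, y1, x2, y2, x3, y3 in product([+1, -1], repeat=6)
    if x1 * x2 * x3 == +1
    and x1 * y2 * y3 == -1
    and y1 * x2 * y3 == -1
    and y1 * y2 * x3 == -1
]
print(len(solutions))  # 0: no deterministic local assignment reproduces all four products
```

The contradiction is purely algebraic: multiplying the three mixed-setting constraints gives x1*x2*x3 = -1 (each y squares to +1), which contradicts the xxx constraint, so no probabilities are needed once the four perfect correlations are granted.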
 
  • #53
Demystifier said:
I disagree that the local causality condition is necessarily probabilistic. In classical relativistic theory, for instance, it is deterministic. GHZ could contradict a deterministic local causality condition.
So what is the local causality condition in your opinion? Can you write down a formula?
 
  • #54
Demystifier said:
I don't understand this. If nonlocality is not weird, then what exactly is weird about candidate deeper theories?
You know the answer: There's nothing weird about quantum theory. It's just our classical prejudice that's weird!
 
  • #55
Nullstein said:
So what is the local causality condition in your opinion? Can you write down a formula?
I explained it qualitatively in my lectures (that I linked in another post) on page 13, but I have to think about how to write it down as a formula.
 
  • #56
vanhees71 said:
You know the answer: There's nothing weird about quantum theory. It's just our classical prejudice that's weird!
I know your answer, but I wanted to see his answer.
 
  • Like
Likes vanhees71
  • #57
martinbn said:
Which candidate theories do you have in mind?
Bohmian, many worlds, GRW, ...
 
  • #58
vanhees71 said:
It's just our classical prejudice that's weird!
In the first post I made a list of necessary assumptions. Which of them, in your opinion, is wrong?
 
  • #59
Demystifier said:
I explained it qualitatively in my lectures (that I linked in another post) on page 13, but I have to think about how to write it down as a formula.
Well, the problem is that the argument in your lecture is very informal, which is fine for an overview, but because of the informal nature, it misses lots of important details. For example, you haven't considered the possibility of superdeterminism or retrocausality and so on. That's what Bell gave us: He turned the informal EPR argument into a falsifiable formula with clearly defined assumptions.

I think the reason why people have settled for probabilistic causality is much deeper: How can you test causality even in principle if you aren't allowed to repeat experiments and draw conclusions from the statistics? If B follows A just once, it could always be just a coincidence. In order to infer causality, you need something like: the occurrence of A increases the probability of the occurrence of B, while the occurrence of (not A) decreases the probability of the occurrence of B. (In fact, this is not enough. That's where Reichenbach enters the discussion.) This line of research has culminated in the causal Markov condition.
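For comparison, the probabilistic local causality (factorization) condition that this line of reasoning leads to, with a, b the local settings, A, B the outcomes and λ the common cause / hidden variable, is commonly written as

$$P(A, B \mid a, b, \lambda) = P(A \mid a, \lambda)\, P(B \mid b, \lambda),$$

which is the standard form in the literature; whether a weaker or non-probabilistic notion could do the same job is exactly what is being debated in this thread.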
 
  • #60
Demystifier said:
In the first post I made a list of necessary assumptions. Which of them, in your opinion, is wrong?
The problem with that list is that it excludes the classicality assumption. I think the answer by Copenhagenists would just be that classicality, in the form of Kolmogorov probability, would have to be rejected, i.e. the world is non-classical and therefore classical probability doesn't apply anymore. You can find research about this in the works of Redei and Hofer-Szabo. They generalize the common cause principle to the case of quantum probability. It reduces to the classical notion in the case of commuting observables.
 
