Assumptions of the Bell theorem

  • #1
Demystifier
Loosely speaking, the Bell theorem says that any theory making the same measurable predictions as QM must necessarily be "nonlocal" in the Bell sense. (Here Bell locality is different from other notions of locality such as signal locality or locality of the Lagrangian. By the Bell theorem, I mean not only the original Bell inequality and its close cousin CHSH inequality, but also the results such as GHZ theorem and Hardy theorem which involve only equalities, not inequalities.) However, any such theorem actually uses some additional assumptions, so many people argue that it is some of those additional assumptions, not locality, that is violated by QM (and by Nature). The aim of this thread is to make a list of all these additional assumptions that are necessary to prove the Bell theorem. An additional aim is to make the list of assumptions that are used in some but not all versions of the theorem, so are not really necessary. The following list of necessary and unnecessary assumptions is supposed to be preliminary, so I invite others to supplement and correct the list.

Necessary assumptions:
- macroscopic realism (macroscopic measurement outcomes are objective, i.e. not merely a subjective experience of an agent)
- statistical independence of the choice of parameters (the choices of which observables will be measured by different apparatuses are not mutually correlated)
- Reichenbach common cause principle (if two phenomena are correlated, then the correlation is caused either by their mutual influence or by a third common cause)
- no causation backwards in time

Unnecessary assumptions:
- determinism (unnecessary because some versions of the theorem use only probabilistic reasoning)
- Kolmogorov probability axioms (unnecessary because the GHZ theorem uses only perfect correlations, i.e. does not use probabilistic reasoning at all)
- hidden/additional variables (unnecessary because some versions of the theorem, e.g. those by Mermin in Am. J. Phys., use only directly perceptible macroscopic phenomena)
- microscopic realism (unnecessary for the same reason as hidden/additional variables)
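
To make the "measurable predictions of QM" above concrete, here is a minimal numerical sketch (my addition, not part of the original post). It assumes the standard singlet-state correlation ##E(\alpha,\beta) = -\cos(\alpha-\beta)## for spin measurements along directions ##\alpha## and ##\beta## and evaluates the CHSH combination at a standard choice of angles; the quantum value ##2\sqrt{2}## exceeds the bound of 2 that any Bell-local model must obey:

```python
import numpy as np

def E(alpha, beta):
    """Singlet-state spin correlation for measurement angles alpha and beta."""
    return -np.cos(alpha - beta)

# A standard choice of CHSH settings (in radians)
a, a1 = 0.0, np.pi / 2          # Alice's two settings
b, b1 = np.pi / 4, -np.pi / 4   # Bob's two settings

S = E(a, b) + E(a, b1) + E(a1, b) - E(a1, b1)
print(abs(S), 2 * np.sqrt(2))   # |S| = 2*sqrt(2) ≈ 2.83 > 2
```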
 
  • Like
Likes facenian, Lynch101, Fractal matter and 2 others
  • #2
Demystifier said:
Unnecessary assumptions:

Demystifier said:
- microscopic realism
How?

 
  • #3
Can you also give a precise statement of the theorem (or at least one version)?
 
  • #4
Demystifier said:
- Reichenbach common cause principle (if two phenomena are correlated, then the correlation is caused either by their mutual influence or by a third common cause)

Demystifier said:
- hidden/additional variables (unnecessary because some versions of the theorem, e.g. those by Mermin in Am. J. Phys., use only directly perceptible macroscopic phenomena)

Anything that prevents us from just labeling the third common cause a "hidden variable" (hidden cause), or how would one distinguish the necessary from the unnecessary here?

/Fredrik
 
  • #5
Fra said:
Anything that prevents us from just labeling the third common cause a "hidden variable" (hidden cause), or how would one distinguish the necessary from the unnecessary here?
According to the Reichenbach principle, one possibility is that the correlated phenomena directly influence each other without a third cause, so an additional/hidden variable is not necessary.
 
  • #6
martinbn said:
Can you also give a precise statement of the theorem (or at least one version).
It is assumed that the reader has already seen at least one version and that he/she knows where to find it to refresh his/her memory. But if you insist, one good reference is
http://www.scholarpedia.org/article/Bell's_theorem
(Please, don't try to draw me into a "but it's not mathematically precise" kind of game, because I have no intention to play it!)
 
  • Haha
Likes romsofia
  • #7
physika said:
How ?
Already answered. If you want additional clarifications, please specify what exactly is not clear.
 
  • #8
Realism is a rather diffuse assumption. I'd consider myself a realist (even an utterly naive realist). But it depends on the answers to questions like "Are photons real?", "What kinds of properties do they have?", or "Is polarization intrinsic to a photon, or an attribute that comes into existence only through the interaction with a detector?". Physicists are conditioned to think of "objects" that are subject to equations of motion. This "Newtonian" world view is the root cause of the problem. I think the realism assumption should be sharpened to the belief that objects have an existence that is continuous in time.

David Mermin once wrote that the problem is explaining the correlations without assuming that something carries information from the source to the detectors. The idea that photons carry polarization information is of course compelling. As compelling as the idea that light cannot propagate without an ether. Meanwhile we no longer need the ether. Photons, too, may turn out to be excessive metaphysical baggage. (Depending, of course, on the connotations of the term. The ether no longer exists, but has been transformed into what we now call "vacuum".)
 
  • #9
Some physicists say that the major difference between quantum mechanics and classical mechanics is that quantum mechanics is fundamentally probabilistic, while in classical mechanics, probability only arises through a lack of knowledge about initial conditions.

I don't think that this is a good description of the difference. There is no difficulty in extending our classical notions to include stochastic processes that are intrinsically nondeterministic. What's strange about quantum mechanics is the combination of probabilities with perfect correlations for distant measurements.

Suppose that Alice performs some action, and afterward, she knows something about what Bob will experience in the future. For example, let's suppose that Bob is planning to take an airplane flight later that day, and he will either get on the plane or not. Alice does something, and afterward, she knows that Bob definitely will not get on the plane.

Classically, we would say that there are two possibilities:

  1. Alice has done something to affect Bob's future.
  2. Alice has learned something about the past or present that will affect Bob's future.

For an example of the first possibility, suppose that Alice calls Bob and asks him not to take the flight. Then after the call, she knows (assuming that her arguments are persuasive enough) that he won't get on the plane.

For an example of the second possibility, suppose that Alice finds Bob's ID. He left it behind. Without an ID, he'll be unable to get on the plane.

The weird thing about quantum mechanics is that QM correlations seem to give a third possibility: That Alice can learn something about Bob's future, but it neither involves Alice affecting Bob, nor does it involve Alice learning anything about Bob's past or present that will affect Bob's future. In an EPR experiment with anti-correlated twin pairs, if Alice measures spin-up for her particle, she knows that Bob will measure spin-down for his particle, even though his measurement might be in the future. It's neither the case that Alice's measurement affects Bob, nor that Alice's measurement reveals something about Bob's past or present that will affect Bob.

Bell's inequality basically relies on the dichotomy that if Alice learns something about Bob's future, then either she is affecting his future, or she learned something about Bob's past or present that will affect his future. If Alice's and Bob's measurements are spacelike separated, then that implies either FTL or hidden variables.
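
For a concrete version of the EPR anti-correlation described above, here is a minimal sketch (my addition, assuming the standard singlet-state prediction that the probability of Alice and Bob getting the same outcome is ##\sin^2\big((\alpha-\beta)/2\big)## for measurement directions ##\alpha## and ##\beta##): when they measure along the same axis this probability is exactly zero, so Alice's "up" fixes Bob's "down" with certainty.

```python
import numpy as np

def p_same(alpha, beta):
    """Singlet-state probability that Alice and Bob obtain the SAME spin outcome
    when measuring along directions alpha and beta (angles in radians)."""
    return np.sin((alpha - beta) / 2) ** 2

print(p_same(0.7, 0.7))        # 0.0: perfect anti-correlation for equal axes
print(p_same(0.0, np.pi / 3))  # 0.25: only partially correlated for different axes
```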
 
  • Like
Likes eloheim, physika and Ege_O
  • #10
WernerQH said:
Realism is a rather diffuse assumption.

I don't think so. In “Niels Bohr and the Philosophy of Physics: Twenty-First-Century Perspectives” (edited by Jan Faye and Henry J. Folse), Arkady Plotnitsky defines realism in a useful way:

"I define 'realism' as a specific set of claims concerning what exists and, especially, how it exists. In this definition, any form of realism is more than only a claim concerning the existence, or reality, of something, but rather a claim concerning the character of this existence. Realist theories are sometimes also called ontological theories."
 
  • #11
Lord Jestocost said:
Arkady Plotnitsky defines realism in a useful way
This definition doesn't look very useful to me, since it just trades diffuseness over what "real" means for diffuseness over what "exists" means.
 
  • Like
Likes Ege_O
  • #12
Demystifier said:
Unnecessary assumptions:
- Kolmogorov probability axioms (unnecessary because the GHZ theorem uses only perfect correlations, i.e. does not use probabilistic reasoning at all)
If you don't have the Kolmogorov probability axioms, you can't formulate the concept of stochastic independence in the first place and there would be no notions of probabilistic causality and causal Markov conditions that could be tested and violated in principle. In fact, counterfactual propositions as in the GHZ theorem only make sense in terms of probabilities, since we can't test them in just a single run. After all, contextuality is a phenomenon that can only possibly appear in situations where not all measurements can be performed simultaneously. The presence of contextuality can thus only be detected in the statistics. The conclusion of a GHZ experiment is the same as for a Bell test experiment: If you require all observables to have simultaneously well-defined values, then unless you are willing to accept irregular explanations such as superdeterminism or FTL actions, you are forced to give up Kolmogorov probability.
 
  • Skeptical
Likes Demystifier
  • #14
Demystifier said:
According to the Reichenbach principle one possibility is that the correlated phenomena directly influence each other without a third cause, so additional/hidden variable is not necessary.
Isn't this effectively by definition a non-local causation?

Anyway, my vote goes for hidden variables! The problem isn't the predetermined correlation; I think the problem is the assumption that the expected evolution of the entangled subsystems depends on the hidden variable. I think this is the error. I see it more as a case where a truly hidden variable is also "screened" from interactions.

In this sense, "ignorance about initial conditions" is not a true hidden variable; it is just regular ignorance. The Bell theorem assumes that the third common cause takes the form of the experimenter's ignorance. So I still suggest that QM can be "incomplete", but not in the sense that we are missing a mechanism that can be described by the physicist's ignorance and thus be subject to Bell's theorem; it is more about the mechanism of how interactions work between subsystems. It might be that this simply cannot be modeled in terms of a regular initial-value ODE; it may require modelling in the form of interacting agents.

/Fredrik
 
  • #15
Demystifier said:
I disagree. For instance, Mermin in http://www.physics.smu.edu/scalise/P5382fa15/Mermin1990a.pdf
explicitly says the opposite in the last sentence of Introduction.

Probabilities don't come into play (since the relevant contradiction with hidden variables only uses 0/1 probabilities, i.e. impossibilities and certainties). However, the model is still nondeterministic in that there are multiple possible outcomes. The GHZ paradox, like the Bell inequality violations, shows that quantum nondeterminism cannot be explained in the classical way (through lack of information), at least not without FTL influences, backwards-in-time influences, or superdeterminism.

Just as a recap of GHZ (since I just learned about this):

There is a source of three entangled spin-1/2 particles.

There are three experimenters, Alice, Bob and Charlie, each of whom will measure the spin of one of the particles relative to one of two possible axes: the x-axis or the y-axis. The results can be summarized as follows:

  • If all three experimenters choose to measure spin relative to the x-axis, then the results will either be 3 spin-up or 1 spin-up.
  • If only one experimenter chooses the x-axis, and the other two choose the y-axis, then the results will either be 0 spin-up or 2 spin-up.
  • We don't care about other detector choices.

So there is nondeterminism in the results. In the first case, there are 4 possible outcomes: either all three get spin-up, or only Alice does, or only Bob does, or only Charlie does. In the second case, again there are 4 possible outcomes: either all three get spin-down, or only Alice does, or only Bob does, or only Charlie does.

The impossibility of a hidden-variables explanation for this result relies on base-2 modular arithmetic.
Assume that for each entangled triple, there is an associated 6-tuple ##\langle A_x, A_y, B_x, B_y, C_x, C_y \rangle##. The interpretation is that ##A_x## is 0 to indicate Alice will measure spin-down in the x-direction, and ##A_x## is 1 to indicate she will measure spin-up. Similarly ##B_x, B_y## determine Bob's results for each measurement choice, and ##C_x, C_y## determine Charlie's results.

To satisfy the QM predictions, it would have to be that
  1. ##A_x + B_x + C_x = 1## (mod 2).
  2. ##A_x + B_y + C_y = 0## (mod 2)
  3. ##A_y + B_x + C_y = 0## (mod 2)
  4. ##A_y + B_y + C_x = 0## (mod 2)

That this is impossible follows from just adding those 4 equations together. The left-hand sides sum to
##(A_x + B_x + C_x) + (A_x + B_y + C_y) + (A_y + B_x + C_y) + (A_y + B_y + C_x)##
##= 2 A_x + 2 A_y + 2 B_x + 2 B_y + 2 C_x + 2 C_y = 0## (mod 2),
while the right-hand sides sum to ##1## (mod 2), which is clearly impossible.
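
To check the parity argument mechanically, here is a small brute-force sketch (not from the post itself): it enumerates all ##2^6 = 64## possible pre-assigned value tuples and confirms that none of them satisfies the four constraints simultaneously.

```python
from itertools import product

# Count the 0/1 assignments to (A_x, A_y, B_x, B_y, C_x, C_y)
# that satisfy all four GHZ parity constraints listed above.
count = 0
for A_x, A_y, B_x, B_y, C_x, C_y in product((0, 1), repeat=6):
    if ((A_x + B_x + C_x) % 2 == 1 and
        (A_x + B_y + C_y) % 2 == 0 and
        (A_y + B_x + C_y) % 2 == 0 and
        (A_y + B_y + C_x) % 2 == 0):
        count += 1

print(count)  # 0: no assignment of pre-existing values reproduces the QM predictions
```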
 
  • Like
Likes eloheim, Ege_O and martinbn
  • #16
Demystifier said:
I disagree. For instance, Mermin in http://www.physics.smu.edu/scalise/P5382fa15/Mermin1990a.pdf
explicitly says the opposite in the last sentence of Introduction.
Please quote complete sentences. I wrote "Counterfactual propositions as in the GHZ theorem only make sense in terms of probabilities, since we can't test them in just a single run" and that is of course true, because their very nature is that they are counterfactual, i.e. they talk about something that would be the case if things were different. No proposition about a single run can be counterfactual. Hence, my sentence is an analytic statement.

The GHZ theorem uses non-commuting variables such as ##\sigma_x^a## and ##\sigma_y^a\sigma_x^b\sigma_y^c##, which can't be measured simultaneously in principle. (They don't commute, because ##\sigma_x^a## doesn't commute with ##\sigma_y^a##.) As a consequence, you can only detect discrepancies in the statistics of multiple runs. If you insist that ##\sigma_x^a## and ##\sigma_y^a\sigma_x^b\sigma_y^c## have simultaneously well-defined values, then you have thrown Kolmogorov probability down the garbage can. This is also not in contradiction with Mermin, who just gives a nice presentation of the GHZ theorem.
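
For concreteness, here is a small numerical check (my addition, using the standard Pauli matrices and the tensor-product ordering a ⊗ b ⊗ c) that ##\sigma_x^a## and ##\sigma_y^a\sigma_x^b\sigma_y^c## indeed do not commute:

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def kron3(a, b, c):
    """Tensor product of three single-particle operators (order: a, b, c)."""
    return np.kron(np.kron(a, b), c)

sigma_x_a = kron3(sx, I2, I2)  # sigma_x on particle a
M = kron3(sy, sx, sy)          # sigma_y^a sigma_x^b sigma_y^c

commutator = sigma_x_a @ M - M @ sigma_x_a
print(np.allclose(commutator, 0))  # False: the two observables do not commute
```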
 
  • Like
Likes Ege_O
  • #17
Moreover, the GHZ theorem is just about quantum mechanics. Bell-type theorems work in two steps:
  1. Use Kolmogorov probability and the causal Markov condition (with regularity assumptions) to arrive at inequalities that hold classically.
  2. Prove that quantum mechanics violates these inequalities.
The GHZ theorem is just one way to show the second part. It does the same job as the calculation of the correlators ##\left<A(\alpha)B(\beta)\right>## that Bell did in his paper, just in a different situation, where certain observables are perfectly correlated.

But in and of itself, the GHZ theorem doesn't say anything spectacular. It's just one way of showing that quantum mechanics is contextual. The spectacular conclusion comes about as soon as you combine it with the first step, which clearly uses Kolmogorov probabilities.

How is "local causality" even defined if not in terms of Kolmogorov probabilities? And what would be the point of the GHZ result, if not to reject local causality (which is a probabilistic notion)?
 
  • #18
PeterDonis said:
This definition doesn't look very useful to me, since it just trades diffuseness over what "real" means for diffuseness over what "exists" means.
In “Niels Bohr and the Philosophy of Physics: Twenty-First-Century Perspectives” (edited by Jan Faye and Henry J. Folse), Arkady Plotnitsky defines "to exist" in the following way:

"By ‘reality’ I refer to that which exists or is assumed to exist, without making any claims concerning the character of this existence. I understand existence as the capacity to have effects upon the world with which we interact." [bold by LJ]
 
  • #19
Demystifier said:
Loosely speaking, the Bell theorem says that any theory making the same measurable predictions as QM must necessarily be "nonlocal" in the Bell sense. (Here Bell locality is different from other notions of locality such as signal locality or locality of the Lagrangian. By the Bell theorem, I mean not only the original Bell inequality and its close cousin CHSH inequality, but also the results such as GHZ theorem and Hardy theorem which involve only equalities, not inequalities.) However, any such theorem actually uses some additional assumptions, so many people argue that it is some of those additional assumptions, not locality, that is violated by QM (and by Nature). The aim of this thread is to make a list of all these additional assumptions that are necessary to prove the Bell theorem. An additional aim is to make the list of assumptions that are used in some but not all versions of the theorem, so are not really necessary. The following list of necessary and unnecessary assumptions is supposed to be preliminary, so I invite others to supplement and correct the list.

Necessary assumptions:
- macroscopic realism (macroscopic measurement outcomes are objective, i.e. not merely a subjective experience of an agent)
- statistical independence of the choice of parameters (the choices of which observables will be measured by different apparatuses are not mutually correlated)
- Reichenbach common cause principle (if two phenomena are correlated, then the correlation is caused either by their mutual influence or by a third common cause)
- no causation backwards in time

Unnecessary assumptions:
- determinism (unnecessary because some versions of the theorem use only probabilistic reasoning)
- Kolmogorov probability axioms (unnecessary because the GHZ theorem uses only perfect correlations, i.e. does not use probabilistic reasoning at all)
- hidden/additional variables (unnecessary because some versions of the theorem, e.g. those by Mermin in Am. J. Phys., use only directly perceptible macroscopic phenomena)
- microscopic realism (unnecessary for the same reason as hidden/additional variables)
Hi Demystifier, nice to talk to you after ten years. Anyway, what would you say about counterfactual definiteness?
 
  • Like
Likes Demystifier
  • #20
lugita15 said:
Hi Demystifier, nice to talk to you after ten years. Anyway, what would you say about counterfactual definiteness?

I think that focusing on counterfactual definiteness is a red herring. No probabilistic theory has counterfactual definiteness, while every deterministic theory does. So it's really about determinism versus nondeterminism.

Perfect correlations/anti-correlations seem to imply determinism and therefore counterfactual definiteness (because it's impossible to arrange them in a nondeterministic theory). But the violations of Bell's inequality (and even more starkly, the GHZ paradox) show that determinism can't be made to work, either. Not without FTL influences, back-in-time influences or superdeterminism.
 
  • #21
Demystifier said:
For instance, Mermin in http://www.physics.smu.edu/scalise/P5382fa15/Mermin1990a.pdf
explicitly says the opposite in the last sentence of Introduction.
In fact, if you read more than the introduction, you will see that Mermin agrees with what I wrote. In section II, he explains that you actually need many runs with different settings, just as I said. You first need to run experiments with the 122, 212 and 221 detector settings in order to arrive at a contradiction with the run from the 111 setting. What Mermin means by "single run" in the last sentence of the introduction is that a single run of the 111 experiment suffices to arrive at a contradiction with the 122, 212 and 221 runs. Nevertheless, you need many runs in total.

This shouldn't be controversial. After all, it's just standard quantum mechanics: if you have non-commuting observables, you can't get the full picture in just a single experiment. You need to perform the experiment again with different settings. That's what the phenomenon of contextuality is all about. There is a stochastic dependence of the measurement results on the apparatus settings that begs for an explanation, especially if the apparatuses are far apart.

The only difference between a GHZ and a CHSH experiment is that in the CHSH setting, no probability is equal to 100%, while in the GHZ setting, one particular probability (the outcome of the 111 experiment) is equal to 100%, with the outcomes for the 122, 212 and 221 settings still being uncertain. The experiments only differ in terms of the numerical values of the predicted probability distributions. Nevertheless, you need to collect data from many runs in both experiments and then check whether the statistics are consistent with the constraints imposed on them by the local causality condition.
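
To put numbers on this (my own sketch, assuming the standard GHZ state ##(|000\rangle + |111\rangle)/\sqrt{2}## with ##|0\rangle## standing for spin-up, and identifying setting "1" with an x-measurement): the product of the three x-outcomes is certain, yet each individual outcome is completely random, which is why the certainty only shows up as a constraint on the statistics of many runs.

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

# GHZ state (|000> + |111>)/sqrt(2), with |0> standing for spin-up
ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)

XXX = kron3(sx, sx, sx)      # product of the three x-outcomes
X_alice = kron3(sx, I2, I2)  # Alice's x-outcome alone

print(np.real(ghz.conj() @ XXX @ ghz))      # 1.0: the product is certain (+1)
print(np.real(ghz.conj() @ X_alice @ ghz))  # 0.0: Alice's own outcome is 50/50
```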
 
  • #22
Lord Jestocost said:
Arkady Plotnitsky defines "to exist" in the following way:

"By ‘reality’ I refer to that which exists or is assumed to exist, without making any claims concerning the character of this existence. I understand existence as the capacity to have effects upon the world with which we interact." [bold by LJ]
The bold part is a valid choice, corresponding to the word "Wirklichkeit" in German (from wirken=effect):
gentzen said:
... debates involving the word "reality" ... There are languages with different words for different aspects of it, for example German has Realität (from res=thing/stuff), Wirklichkeit (from wirken=effect), and Gegebenheit (from gegeben=given).
However, your previous quote gave the impression that his choice was closer to the word "Realität" in German (from res=thing/stuff): "I define 'realism' as a specific set of claims concerning what exists and, especially, how it exists. ... Realist theories are sometimes also called ontological theories."

I cannot believe that the meaning of "ontological theories" can be reduced to a mere instrumentalist "capacity to have effect" so nicely encapsulated by the word "Wirklichkeit" in German.
 
  • #23
lugita15 said:
Hi Demystifier, nice to talk to you after ten years. Anyway, what would you say about counterfactual definiteness?
Thanks, I think it should be added to the list of necessary assumptions.
 
  • Like
Likes gentzen
  • #24
stevendaryl said:
I think that focusing on counterfactual definiteness is a red herring. No probabilistic theory has counterfactual definiteness, while every deterministic theory does. So it's really about determinism versus nondeterminism.
I don't think so. Counterfactual definiteness is unrelated to determinism. It only means that it makes sense to ask what would have happened if some facts had been different. For example, what is the probability that I would have died last year if there had been no Covid-19 pandemic? If that question has an answer, then that's counterfactual definiteness.
 
  • Like
Likes gentzen
  • #25
@Nullstein and @stevendaryl If we conclude that the Kolmogorov probability axioms should be moved to the necessary list, then I have a question. Does that mean that experimental verification of classical deterministic physics also needs probability?
 
  • Like
Likes facenian
  • #26
Fra said:
Isn't this effectively by definition a non-local causation?
Yes it is. Is that a problem?
 
  • #27
Demystifier said:
I don't think so. Counterfactual definiteness is unrelated to determinism. It only means that it makes sense to talk about what if some facts were different. For example, what is the probability that I would have died last year if there was no Covid-19 pandemic? If that question has an answer, then it's counterfactual definiteness.
This sounds like the two different "interpretations" of probability: descriptive, or guiding.

The descriptive probability is essentially statistical, and does not describe the statistics of events that haven't happened.

The guiding interpretation is the conditional one that is more useful in the QBist view. It is just an expectation over POSSIBLE outcomes (whether they actually have happened or not). The functional value of this measure is not descriptive of the past but predictive of the future, and thus guides a possible agent's actions.

I think this is rarely emphasized. The descriptive probability is the one that we compare with experiments, i.e. actually performed experiments. Here one does not speak about distributions of things that didn't happen or weren't observed.

But the predictive probability is in principle indirectly verified via the behaviour of, say, QBist agents. By extension, I link this to the Hamiltonian and Lagrangian of systems, i.e. finding out how matter interacts indirectly unravels its intrinsic guiding probabilities.

So one type of probability is the one we "measure"; the other one is "abduced as the best explanation". The latter is harder to understand, though. This is IMO one reason probability is confusing in QM. The understanding of this is IMO unfinished; it's only half baked.

/Fredrik
 
  • #28
Demystifier said:
@Nullstein and @stevendaryl If we conclude that Kolmogorov probability axioms should be moved to the necessary list, then I have a question. Does it mean that experimental verification of classical deterministic physics also needs probability?
I'm not sure whether I understand the question. In a classical deterministic world, probabilities appear, because we can never know the initial conditions exactly. We don't need them in the formulation of the theory itself.

However, this thread is about Bell-type theorems and they all have in common that they derive consequences from probabilistic causality, which of course requires classical probability for its formulation. So if you specifically want to test whether your deterministic theory obeys probabilistic causality, you first need to derive probabilistic propositions from it, e.g. correlations.
 
  • #29
Demystifier said:
- statistical independence of the choice of parameters (the choices of which observables will be measured by different apparatuses are not mutually correlated)
Your explanation tries to suppress an explicit reference to hidden variables. Maybe that can be done, but your explanation seems to suppress too much. Here is one explanation why: The mutual correlation (and its absence) between which observables have been measured by different apparatuses is part of (i.e. can be computed from) the "recordable" results of experiments.
 
  • #30
Demystifier said:
- macroscopic realism (macroscopic measurement outcomes are objective, i.e. not merely a subjective experience of an agent)
This assumption can probably be weakened à la Asher Peres: macroscopic intersubjectivity. A measurement outcome might not be objective when we consider "possible" superobservers larger than the universe, but all human observers will agree on the measurement outcome.
 
  • Like
Likes Demystifier
  • #31
Morbert said:
This assumption can probably be weakened a la Asher Peres: Macroscopic intersubjectivity. A measurement outcome might not be objective when we consider "possible" super observers larger than the universe, but all human observers will agree on the measurement outcome.
What's the point of introducing the superobserver? If I had to answer myself, I would say: to discuss the many-worlds interpretation, but that doesn't look like something Peres would propose.
 
  • #32
Demystifier said:
Yes it is. Is that a problem?
The Reichenbach common cause principle only applies to events in the same event space, right?

As I see the physical and experimental construction of the event spaces or ensembles, the observational events at Alice, Bob and the hypothetical sampling of the hidden variable do not belong to the same event space. I see it as a fallacious deduction in the Bell-type causality, where the events at Alice and Bob are independently functions of the hidden cause.

I.e. the premise of "Bell realism" about how causation in nature works is likely the main problem, rather than non-locality, which I think was the original topic?

/Fredrik
 
  • #33
Demystifier said:
What's the point of introducing the superobserver?
If there are superobservers, facts for ordinary observers are not necessarily facts for superobservers. Ordinary facts then are not objective but only objective FAPP for ordinary observers, i.e. intersubjective. One could weaken the assumption like this, but since all physics experiments will be performed by Earth-scale observers for the foreseeable future, I think this is a minor point.

On conceptual grounds, this makes room for a super-macroscopic Copenhagen-like interpretation which has objective superfacts but not objective ordinary facts. But since one is immediately led to ask about the existence of super-superobservers who might question the superfacts, I don't think it is interesting to squeeze this kind of interpretation in between classic Copenhagen and Many Worlds.
 
  • Haha
Likes eloheim
  • #34
To me, the assumptions behind Bell's theorem are essentially what Einstein argued in the original EPR paper: If you can make a definite prediction about the outcome of a distant measurement, then it means either that the outcome was determined before the prediction was made, or (and this possibility implies FTL influences) the act of prediction itself affected the outcome.
 
  • Like
Likes DrChinese
  • #35
stevendaryl said:
To me, the assumptions behind Bell's theorem are essentially what Einstein argued in the original EPR paper: If you can make a definite prediction about the outcome of a distant measurement, then it means either that the outcome was determined before the prediction was made, or (and this possibility implies FTL influences) the act of prediction itself affected the outcome.
What I find imprecise here is what is meant by prediction. Usually you take as initial data all the values of all fields on a space-like hypersurface and determine their values for the following surfaces in the foliation. But in EPR, you are given the value at one point on a space-like hypersurface and determine a value at a different point on the same surface. Why is that called prediction?!
 
