B Simple proof of Bell's theorem

  • #51
jeremyfiennes said:
Ok.

I interpret that to mean the original thread is now at a nice stopping point.

:biggrin: :biggrin: :biggrin:
 
  • #52
DrChinese said:
zonde said:
But please explain how you understand assumption (1). Does this assumption include theories where measurement of one particle from entangled pair can instantaneously change "pre-existing" property of the other particle from the pair?
Yes, if the following are features as well:

"(1) all measurement outcomes are determined by pre-existing properties of particles independent of the measurement (realism); (2) physical states are statistical mixtures of subensembles with definite polarization, where (3) polarization is defined such that expectation values taken for each subensemble obey Malus’ law (that is, the well-known cosine dependence of the intensity of a polarized beam after an ideal polarizer)."
Leggett, in his paper "Nonlocal Hidden-Variable Theories and Quantum Mechanics: An Incompatibility Theorem", had assumption 4:
"4.
##A(a, b, \lambda : B)=A(a, b, \lambda),\;\; B(a, b, \lambda : A)=B(a, b, \lambda),\;\; (2.4)##
i.e., the outcome of the measurement of A is independent of the outcome at the distant station 2 and vice versa ("outcome-independence," cf. Jarrett(5))."

If we allow changing either function ##A(a, b, \lambda)## or ##B(a, b, \lambda)## based on who makes his measurement first, we violate Leggett's assumption 4.
 
  • #53
DrChinese said:
And just to drive the point home: do you not see that the number of effects we term as "non-local" is limited? They are almost all centered around entangled systems with spatial extent. Spatially separated systems which are not entangled generally do NOT interact. If you say there are non-local effects in a candidate theory, you are compelled to explain how and why those effects are so incredibly limited (why doesn't everything affect everything, for example).
I would like to say that this is a really good question and it requires a good answer. I have thought about it, but I would like to keep my speculations to myself.
However, I think it is related to the question of how there can be pure states (say, a coherent polarized beam of light) given entanglement phenomena.
 
  • #54
DrChinese said:
Seriously: what you have presented has no connection whatsoever to theoretical quantum mechanics.
What I presented is no more or less than what I claimed in posts #36, #38, and #40, i.e. if Alice and Bob can communicate FTL (they are in effect no longer really separated), they can violate Bell's inequality (get the same correlations as QM) and maintain realism. You denied this, but you are right that what Alice and Bob are doing has nothing to do with QM, so what.

If locality means no FTL communication, then if one says they give up locality, an interpretation of that is that Alice and Bob can communicate FTL. If all one means is that there is a non-local element, then I agree with you.
 
  • #55
Zafa Pi said:
You denied this, but you are right that what Alice and Bob are doing has nothing to do with QM, so what.

Well I think the point that Dr Chinese is trying to make is that the phenomenon of entanglement goes much deeper, and is more pervasive, than simply being able to violate a Bell inequality.

I've no doubt that one could construct some artificial piece of theory that would be non-local and realistic that would mimic the observations made in a specific Bell inequality experiment. It wouldn't look like physics as we know it (either classical or quantum) but would just be a theory specifically designed to reproduce the features of one experiment. Would the same theory then be able to explain the results from a GHZ state, say? Would the same theory then predict the phenomenon of entanglement swapping? And so on.

The thing is that with QM we have a single coherent and logical theory that explains all of this in one framework - we don't need to introduce all sorts of ad-hoc assumptions for each new experiment - everything follows from the few basic axioms and postulates of QM.

The closest we've got so far (to my knowledge) to a non-local realistic theory that reproduces all the results of QM is Bohmian mechanics - but that, like all of the interpretations of QM, has got its own 'weirdness' (the whole business of interpretation seems to me to be about shifting the awkward bits under the rug we're most comfortable with).
 
  • #56
I have heard of another non-local realistic theory which may not meet everyone's requirements for "realistic" but it does seem to illustrate the possibility.
  1. First, every point in space contains a copy of the wavefunction of the entire universe.
  2. Each copy updates unitarily.
  3. Whenever a measurement is made at one point, that copy is collapsed.
  4. Updates are broadcast to all other points.
Some time synchronisation protocols are probably necessary too, but I would think the above could reproduce the expectations of QM.
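For concreteness, here is a minimal sketch of steps 1-4 for a single entangled pair rather than the whole universe; the three named "points" and the `measure` helper are my own illustrative inventions, not part of any published model:

```python
# Sketch of the "broadcast collapse" toy model above: every point holds its
# own copy of the (here, tiny) global state; a measurement at one point
# collapses that copy, and the collapsed state is broadcast to all others.
import numpy as np

rng = np.random.default_rng(0)

def measure(state, projectors):
    """Collapse `state` with one of the given projectors, Born-rule weighted."""
    probs = [np.vdot(state, P @ state).real for P in projectors]
    k = rng.choice(len(projectors), p=np.array(probs) / sum(probs))
    collapsed = projectors[k] @ state
    return k, collapsed / np.linalg.norm(collapsed)

# Singlet-like state of two qubits, (|01> - |10>)/sqrt(2).
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

# Projectors for a z-basis measurement on qubit A (acting on the 2-qubit space).
up = np.diag([1, 1, 0, 0]).astype(complex)    # qubit A found in |0>
down = np.diag([0, 0, 1, 1]).astype(complex)  # qubit A found in |1>

# Step 1: every point holds a copy of the state. Step 2 (unitary updates) is
# trivial here, so it is omitted.
copies = {point: singlet.copy() for point in ("here", "there", "elsewhere")}

# Step 3: a measurement at "here" collapses the local copy...
outcome, collapsed = measure(copies["here"], [up, down])
# Step 4: ...and the update is broadcast to all other points.
for point in copies:
    copies[point] = collapsed

print("outcome at A:", outcome)
print("all copies equal:", all(np.allclose(v, collapsed) for v in copies.values()))
```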
 
  • #57
DrChinese said:
I interpret that to mean the original thread is now at a nice stopping point.

:biggrin: :biggrin: :biggrin:
No, not just yet please! I now realize I am unclear on the meaning of "hidden variables". I had imagined these to be conceived of, but at present immeasurable, properties of the observed object. From Nugatory's reply (#49), however, it seems that they can also be factors in the object's environment. In the analogy of a doctor measuring a patient's blood pressure (that is, measuring the blood pressure of a patient who is having his blood pressure measured by a doctor), would it be true to say that:
-- unmeasured patient-associated variables (how well he slept, what he had for breakfast, etc) are "hidden"
-- unmeasured environment-associated variables (the temperature, noise level of the ward, etc.) are likewise "hidden"
-- their combined effect is "experimental error", and can be reduced by including the variables in the model
-- the doctor-effect is uncontrollable observer-dependent "uncertainty" -- the patient could react to a male doctor in one way, to a female doctor in another, and so on?

And that:
-- for realists, the patient has a real, doctor-independent blood-pressure, even though it cannot be determined
-- for positivists, it is meaningless to talk of a real blood-pressure, because it cannot be determined
-- for quantum physics, the real blood pressure is what it is measured to be, and before that did not exist?
 
  • #58
jeremyfiennes said:
-- for realists, the patient has a real, doctor-independent blood-pressure, even though it cannot be determined
-- for positivists, it is meaningless to talk of a real blood-pressure, because it cannot be determined
-- for quantum physics, the real blood pressure is what it is measured to be, and before that did not exist?
This is an unfortunately very confusing example, because all the sources of uncertainty you cite are not problems with the observable we're measuring (there's no problem with the manometer reading), but rather with how good a proxy that measurement is for what the doctor really wants to know, namely what level of treatment for hypertension is indicated. (Or, informally, not only have you not supplied a definition for "real blood pressure", you've made a pretty good case that the phrase is undefined).

For more helpful examples, you might try these three philosophical positions against two phenomena: thermodynamic pressure, for which we have an accepted hidden-variable theory; and quantum spin, for which we do not.
 
  • #59
jeremyfiennes said:
I had imagined [hidden variables] to be conceived of, but at present immeasurable, properties of the observed object.
It's not "hidden variables", it's "hidden variable theory". The hidden variables are just whatever inputs a candidate theory uses to make predictions, so you can't say anything concrete about them except in the context of a particular candidate theory.

Bell's theorem operates, not by proving that there are no hidden variables, but by proving that all candidate theories that have a particular set of mathematical properties will fail. Hidden variable theories only come into the logic because it turns out that all local realistic hidden variable theories (for most generally accepted definitions of "local realistic hidden variable theory") happen to have those properties so are precluded.

(Do note, however, that the previous paragraph is running the history backwards. Bell started with that particular set of mathematical properties because they covered all possible LHV theories - that's what made them interesting.)
 
  • #60
So how would a "hidden variable" in general be defined?
 
  • #61
jeremyfiennes said:
So how would a "hidden variable" in general be defined?
An input to your candidate theory.
 
  • #62
jeremyfiennes said:
On a more simplistic level, a standard formulation of Bell's Theorem (e.g. #35) is that "No physical theory of Local Hidden Variables can ever reproduce all of the predictions of Quantum Mechanics". Local Hidden Variables theories are however realistic, and give uniquely defined values. Whereas Quantum Mechanics' predictions are probabilistic. Does it not go without saying that no realistic theory can ever reproduce probabilistic results, and vice versa?

The connection between Bell's Theorem and determinism is muddled in many people's minds (but not mine!). When Einstein famously said "God does not play dice", he was expressing his conviction that the most fundamental theories should be deterministic. But it's perfectly possible to develop a notion of a locally realistic, nondeterministic system: it's just a stochastic process. However, it's straightforward to see that if there is a locally realistic nondeterministic model of some system, then there is also a locally realistic deterministic model of the same system. You just assume that the nondeterminism is resolved by some hidden variable. So if you prove that there is no locally realistic deterministic model, then it also follows that there is no locally realistic nondeterministic model. Nondeterminism versus determinism is not particularly relevant. Bell's Theorem as he stated it only proves that there is no locally realistic deterministic model that explains EPR. But it's not too hard to show that there is no locally realistic nondeterministic model, either.
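To illustrate that reduction, here is a minimal sketch (the probability rule is made up for illustration) of how a stochastic local model becomes a deterministic one once the random seed is promoted to part of the hidden variable:

```python
# Sketch: turning a stochastic local model into a deterministic one by
# absorbing the randomness into an extra hidden variable (the seed).
import random

def stochastic_outcome(setting, lam, rng):
    """A toy stochastic local model: outcome +1/-1 with a setting- and
    lam-dependent bias, resolved by the random stream `rng`."""
    p_plus = 0.5 + 0.4 * lam * setting   # an illustrative local probability rule
    return +1 if rng.random() < p_plus else -1

def deterministic_outcome(setting, hidden):
    """The same model, now deterministic: the hidden variable is the pair
    (lam, seed), and the 'randomness' is just a function of the seed."""
    lam, seed = hidden
    return stochastic_outcome(setting, lam, random.Random(seed))

# For a fixed hidden state the outcome is fully reproducible:
hidden = (0.5, 12345)
print([deterministic_outcome(+1, hidden) for _ in range(3)])  # same value 3 times
```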
 
  • #63
Nugatory said (#58):
"This is an unfortunately very confusing example, because all the sources of uncertainty you cite are not problems with the observable we're measuring (there's no problem with the manometer reading), ..."
The problem is with the manometer reading. A dour sour male doctor gets one reading. And to provide a double check, a sweet sugary female doctor attempts to replicate the result, and gets a totally different reading. How would QM quantify this?
 
  • #64
Nugatory said:
An input to your candidate theory.
An input to a candidate theory is not necessarily "hidden", making the term somewhat confusing.
 
  • #65
stevendaryl said:
Bell's Theorem as he stated it only proves that there is no locally realistic deterministic model that explains EPR. But it's not too hard to show that there is no locally realistic nondeterministic model, either.
I am bothered by the reference to one hidden variable. What would prevent the situation where there was one real variable and a second random one?
 
  • #66
stevendaryl said:
The connection between Bell's Theorem and determinism is muddled in many people's minds. When Einstein famously said "God does not play dice", he was expressing his conviction that the most fundamental theories should be deterministic.
This makes good sense to me. A model is something to which one inputs certain values and gets an output value (or values). A model for a weighing scale says that ##w_{out} = w_{in1} + w_{in2} + \dots## In a non-deterministic system where ##w_{in1} = 2## kg and ##w_{in2} = 3..5## kg (somewhere between 3 and 5 kg), the model gives a non-deterministic output ##w_{out} = 5..7## kg. Einstein's thesis was that if one had complete information on the hidden variables of body 2, for instance its volume and density, then one would get a deterministic system where ##w_{in1} = 2## kg, ##w_{in2} = 4## kg, and a deterministic output ##w_{out} = 6## kg. That is why he considered QM incomplete. And why I asked how a deterministic hidden variable model could predict stochastic results. But I agree that all this has nothing to do with Bell. What holds for deterministic systems also holds for the indeterminate variety.
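A trivial sketch of that weighing-scale example (numbers follow the post):

```python
# Sketch of the weighing-scale model: an interval-valued input gives an
# interval-valued (non-deterministic) output; pinning down the hidden
# value makes the output deterministic.
def scale(*weights):
    return sum(weights)

# Non-deterministic: w_in2 is only known to lie between 3 and 5 kg.
lo, hi = scale(2.0, 3.0), scale(2.0, 5.0)
print(f"w_out somewhere in {lo}..{hi} kg")   # 5.0..7.0 kg

# Complete information (hidden variables known): w_in2 = 4 kg.
print(f"w_out = {scale(2.0, 4.0)} kg")       # 6.0 kg
```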
 
  • #67
Thank you for replying, and hopefully you can clear some things up for me.
Simon Phoenix said:
Well I think the point that Dr Chinese is trying to make is that the phenomenon of entanglement goes much deeper, and is more pervasive, than simply being able to violate a Bell inequality.
That may well be, but what I am trying to figure out is what it means to say "give up locality". A simple and common meaning of locality is no FTL influence or communication.
1) So does "give up locality" mean that FTL communication is possible, like my quikfone in post #40?
2) If not, why? (how does it conflict with nature?)
3) If so, does that not provide a non-local realistic way to replicate the correlations in any of the Bell examples (including the GHZ example, see post #40)?
 
  • #68
jeremyfiennes said:
The problem is with the manometer reading. A dour sour male doctor gets one reading. And to provide a double check, a sweet sugary female doctor attempts to replicate the result, and gets a totally different reading. How would QM quantify this?
It doesn't, it doesn't need to, and it shouldn't be expected to.

This is a classical situation. It's a very complicated classical problem with a lot of moving parts, and the identity of the technician is just one of an enormous number of potentially uncontrolled variables (there's an entire science around designing medical experiments to eliminate this sort of effect) but it's still a classical problem. The dour sour doctor measures my blood pressure, and gets one value. The friendly warm doctor measures it again a bit later and gets another value. Is there any sensible conclusion from this other than that my blood pressure varies over time based on a lot of complicated considerations?

None of this has much to do with quantum mechanics, where the situation is that before the measurement the system is described by some state ##|\psi\rangle##; if we want to measure observable ##A## we write the state as ##|\psi\rangle=c_1|\alpha_1\rangle+c_2|\alpha_2\rangle+c_3|\alpha_3\rangle+\dots## where ##A|\alpha_i\rangle=\alpha_i|\alpha_i\rangle##; then the probability of getting the result ##\alpha_i## is ##|c_i|^2##. That's a completely different problem.
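A minimal numeric sketch of that rule, with a made-up observable and state:

```python
# Sketch of the measurement rule above: expand |psi> in the eigenbasis of
# the observable and read off the Born-rule probabilities |c_i|^2.
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, -1.0]])          # a toy Hermitian observable
eigvals, eigvecs = np.linalg.eigh(A)  # columns of eigvecs are the |alpha_i>

psi = np.array([0.8, 0.6 + 0.0j])    # a normalized state |psi>
c = eigvecs.conj().T @ psi            # coefficients c_i = <alpha_i|psi>
probs = np.abs(c) ** 2                # Born rule: P(alpha_i) = |c_i|^2

for a_i, p in zip(eigvals, probs):
    print(f"result {a_i:+.3f} with probability {p:.3f}")
print("total:", probs.sum())          # sums to 1 for a normalized state
```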
 
  • #69
Jilang said:
I am bothered by the reference to one hidden variable. What would prevent the situation where there was one real variable and a second random one?

There is no real distinction between one variable versus two or 100. You can always lump them all together into a single variable. I don't see how it would make any difference.
 
  • #70
Simon Phoenix said:
Well I think the point that Dr Chinese is trying to make is that the phenomenon of entanglement goes much deeper, and is more pervasive, than simply being able to violate a Bell inequality.

I've no doubt that one could construct some artificial piece of theory that would be non-local and realistic that would mimic the observations made in a specific Bell inequality experiment. It wouldn't look like physics as we know it (either classical or quantum) but would just be a theory specifically designed to reproduce the features of one experiment. Would the same theory then be able to explain the results from a GHZ state, say? Would the same theory then predict the phenomenon of entanglement swapping? And so on.

The thing is that with QM we have a single coherent and logical theory that explains all of this in one framework - we don't need to introduce all sorts of ad-hoc assumptions for each new experiment - everything follows from the few basic axioms and postulates of QM.

The closest we've got so far (to my knowledge) to a non-local realistic theory that reproduces all the results of QM is Bohmian mechanics - but that, like all of the interpretations of QM, has got its own 'weirdness' (the whole business of interpretation seems to me to be about shifting the awkward bits under the rug we're most comfortable with).

You hit the nail on the head. These points are often overlooked. There is no question that non-locality of an ad hoc variety can lead you to a specific point. But why does nature stop where it does? Does the ad hoc theory explain that? QM does. We just don't know whether the underlying mechanism is non-local or non-realistic (or both).

The referenced article specifically assumes a particular definition of realism AND it specifically assumes the ##\cos^2\theta## relationship of QM is to be recreated. From that they demonstrate a contradiction a la Leggett (not Bell). Non-local theories meeting that definition of realism are ruled out. Others that don't - such as BM (per Bohmians) - are not affected. Most Bohmians accept that BM is contextual, and therefore reject that article's definition of realism (as it requires non-contextuality).
 
  • #71
Zafa Pi said:
Thank you for replying, and hopefully you can clear some things up for me.

That may well be, but what I am trying to figure out is what it means to say "give up locality". A simple and common meaning of locality is no FTL influence or communication.
1) So does "give up locality" mean that FTL communication is possible, like my quikfone in post #40?
2) If not, why? (how does it conflict with nature?)
3) If so, does that not provide a non-local realistic way to replicate the correlations in any of the Bell examples (including the GHZ example, see post #40)?

You may find this hard to accept, but not all theories featuring non-local elements are the same. Just saying a theory is non-local does not even come close to explaining Bell Inequality or Leggett Inequality violations. They might, but it really depends on the nature of the non-locality, don't you think? Perhaps there is signalling from Alice to Bob, but Bob does nothing on getting the signal. Or maybe Bob sometimes acts but not always. Maybe sometimes he listens to Chris and Dale instead of Alice. Who's to say? Ultimately you do when constructing such a theory, but until you do and present it, we can't really address it. The point is: what are the parameters of your theory, and is it realistic in the sense of the referenced paper?

Ones that follow the parameters described in the referenced paper are excluded by experiment. Others that are also non-local are not.
 
  • #72
stevendaryl said:
There is no real distinction between one variable versus two or 100. You can always lump them all together into a single variable. I don't see how it would make any difference.
If one of the variables is pre-existing and the other is random until measured, how can they be lumped together?
 
  • #73
Jilang said:
If one of the variables is pre-existing and the other is random until measured, how can they be lumped together?

The whole idea of hidden variables is to explain EPR in terms of purely local interactions. So if it's purely local, then Alice's result depends on ##\lambda##, which is state information that she shares with Bob; ##\alpha##, which is Alice's choice of settings; and possibly ##\lambda_A##, which is some other variable local to Alice (maybe it describes the details of Alice's detector). Similarly, Bob's result depends on ##\lambda##; ##\beta##, which is Bob's choice of settings; and possibly ##\lambda_B##, some details about Bob's local situation.

So what I think you're suggesting is that ##\lambda_A## and ##\lambda_B## might be random, determined at the moment that Alice and Bob, respectively, perform their measurements. I'm pretty sure that can't possibly make any difference, unless you somehow say that the settings of ##\lambda_A## and ##\lambda_B## are correlated, which would just push the problem back to how their correlations are enforced.

In any case, the perfect anti-correlations of EPR imply that ##\lambda_A## and ##\lambda_B## can have no effect in a local model.

Let ##P_A(A | \alpha, \lambda, \lambda_A)## be the probability that Alice gets measurement result ##A## (##\pm 1##) given shared hidden variable ##\lambda##, setting ##\alpha##, and local random variable ##\lambda_A##. Similarly, let ##P_B(B | \beta, \lambda, \lambda_B)## be the probability that Bob gets ##B## given his setting ##\beta##, the value of the shared hidden variable ##\lambda##, and his local random variable ##\lambda_B##.

The anti-correlated EPR probabilities tell us that if ##\alpha = \beta##, then there is no possibility of Alice and Bob getting the same result. What that means is that for all possible values of ##\lambda##, the product

##P_A(A | \alpha, \lambda, \lambda_A)\, P_B(A | \alpha, \lambda, \lambda_B) = 0##

This means that if ##P_A(A | \alpha, \lambda, \lambda_A) \neq 0##, then ##P_B(A | \alpha, \lambda, \lambda_B) = 0##. Since there are only two possible results for Bob, if one of the results has probability 0, then the other (##-A##) has probability 1. So we conclude:

If ##P_A(A | \alpha, \lambda, \lambda_A) \neq 0##, then ##P_B(-A | \alpha, \lambda, \lambda_B) = 1##.

But it's also true that ##P_A(-A | \alpha, \lambda, \lambda_A)\, P_B(-A | \alpha, \lambda, \lambda_B) = 0##, so if ##P_B(-A | \alpha, \lambda, \lambda_B) = 1##, then ##P_A(-A | \alpha, \lambda, \lambda_A) = 0## and so ##P_A(A | \alpha, \lambda, \lambda_A) = 1##. So we have:

If ##P_A(A | \alpha, \lambda, \lambda_A) \neq 0## then ##P_A(A | \alpha, \lambda, \lambda_A) = 1##

Similarly,

If ##P_B(B | \beta, \lambda, \lambda_B) \neq 0## then ##P_B(B | \beta, \lambda, \lambda_B) = 1##

That means that the probabilities for Alice's possible results are either 0 or 1, and similarly for Bob. That means that Alice's result is actually a deterministic function of the parameters ##\alpha, \lambda, \lambda_A##, and similarly, Bob's result is a deterministic function of ##\beta, \lambda, \lambda_B##. So there are two functions: ##F_A(\alpha, \lambda, \lambda_A)##, which returns ##+1## or ##-1##, giving the result of Alice's measurement, and a second function, ##F_B(\beta, \lambda, \lambda_B)##, giving the result of Bob's measurement.

Now, again, perfect anti-correlation means that if ##\alpha = \beta##, then ##F_A(\alpha, \lambda, \lambda_A) = - F_B(\alpha, \lambda, \lambda_B)##. That has to be true for all values of ##\lambda_A##. That means that ##F_A(\alpha, \lambda, \lambda_A)## doesn't actually depend on ##\lambda_A##, and similarly ##F_B(\beta, \lambda, \lambda_B)## doesn't actually depend on ##\lambda_B##. So extra hidden variables, if they are local and uncorrelated, have to be irrelevant.
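As a numerical sanity check of this argument, here is a small Monte Carlo sketch (the distributions are made up for illustration): a local model whose outcomes genuinely depend on independent local noise cannot maintain the perfect anti-correlation at equal settings.

```python
# Sketch: if Alice's outcome really depends on independent local noise
# lambda_A, then at equal settings Alice and Bob sometimes agree, breaking
# the perfect anti-correlation of EPR.
import random

random.seed(1)

def alice(alpha, lam, lam_a):
    # Outcome depends on the shared lam AND the private noise lam_a.
    return +1 if (lam * alpha + 0.3 * lam_a) > 0 else -1

def bob(beta, lam, lam_b):
    return -1 if (lam * beta + 0.3 * lam_b) > 0 else +1

agreements = 0
trials = 100_000
for _ in range(trials):
    lam = random.uniform(-1, 1)      # shared hidden variable
    lam_a = random.uniform(-1, 1)    # Alice's private noise
    lam_b = random.uniform(-1, 1)    # Bob's private noise
    if alice(1.0, lam, lam_a) == bob(1.0, lam, lam_b):
        agreements += 1

# Perfect anti-correlation would require zero agreements at equal settings;
# the dependence on local noise produces a strictly positive rate instead.
print("agreement rate at equal settings:", agreements / trials)
```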
 
  • #74
I quite agree. However it is not the perfect anti-correlation that causes the issues, is it?
 
  • #75
Jilang said:
I quite agree. However it is not the perfect anti-correlation that causes the issues, is it?

Well, you don't have to have perfect anticorrelations in order to violate Bell's inequality. I'm just saying that in the case of perfect anticorrelations, you may as well assume that the output is a deterministic function of the setting and the hidden variable.
 
  • #76
jeremyfiennes said:
Quantum variables, where there is only a probability of getting a given result, are therefore non-realistic.
If my interpretation is correct, quantum properties that affect the probability distribution of outcomes are undefined before the outcomes occur. If you could flip a quantum coin, while it is spinning through the air both faces would be blank; neither heads nor tails.
 
  • #77
jeremyfiennes said:
The thread I wanted to post my question on got closed. Recapitulating:

The best (simplest) account I have found to date for the Bell inequality (SPOT stands for Single Photon Orientation Tester):
Imagine that each random sequence that comes out of the SPOT detectors is a coded message. When both SPOT detectors are aligned, these messages are exactly the same. When the detectors are misaligned, "errors" are generated and the sequences contain a certain number of mismatches. How these "errors" might be generated is the gist of this proof.

Step One: Start by aligning both SPOT detectors. No errors are observed.

Step Two: Tilt the A detector till errors reach 25%. This occurs at a mutual misalignment of 30 degrees.

Step Three: Return A detector to its original position (100% match). Now tilt the B detector in the opposite direction till errors reach 25%. This occurs at a mutual misalignment of -30 degrees.

Step Four: Return B detector to its original position (100% match). Now tilt detector A by +30 degrees and detector B by -30 degrees so that the combined angle between them is 60 degrees.

What is now the expected mismatch between the two binary code sequences? We assume, following John Bell's lead, that REALITY IS LOCAL. Assuming a local reality means that, for each A photon, whatever hidden mechanism determines the output of Miss A's SPOT detector, the operation of that mechanism cannot depend on the setting of Mr B's distant detector. In other words, in a local world, any changes that occur in Miss A's coded message when she rotates her SPOT detector are caused by her actions alone. And the same goes for Mr B. The locality assumption means that any changes that appear in the coded sequence B when Mr B rotates his SPOT detector are caused only by his actions and have nothing to do with how Miss A decided to rotate her SPOT detector.

So with this restriction in place (the assumption that reality is local), let's calculate the expected mismatch at 60 degrees. Starting with two completely identical binary messages, if A's 30 degree turn introduces a 25% mismatch and B's 30 degree turn introduces a 25% mismatch, then the total mismatch (when both are turned) can be at most 50%. In fact the mismatch should be less than 50% because if the two errors happen to occur on the same photon, a mismatch is converted to a match. Thus, simple arithmetic and the assumption that Reality is Local leads one to confidently predict that the code mismatch at 60 degrees must be less than 50%.

However both theory and experiment show that the mismatch at 60 degrees is 75%. The code mismatch is 25% greater than can be accounted for by independent error generation in each detector. Therefore the locality assumption is false. Reality must be non-local.


Great. Finally an explanation of Bell's theorem that even I can understand! My question relates to the following part: "Imagine that each random sequence that comes out of the SPOT detectors is a coded message. When both SPOT detectors are aligned, these messages are exactly the same. When the detectors are misaligned, "errors" are generated and the sequences contain a certain number of mismatches." A "mismatch" however would be a mismatch with respect to the code emitted by the other detector, implying a communication between the two. Does this not violate their independence?
Thanks.
Where did you get the statement (e.g. in your Step One) that tilting one detector so as to reach 25% errors occurs at 30 degrees?
 
  • #78
ljagerman said:
Where did you get the statement (e.g. in your Step One) that tilting one detector so as to reach 25% errors occurs at 30 degrees?
In this toy model it's arbitrary what the mismatch at any angle is. The point being made is that the mismatch when both detectors are tilted should not exceed twice the mismatch when one detector is tilted, no matter what it is.
 
  • #79
ljagerman said:
Where did you get the statement (e.g. in your Step One) that tilting one detector so as to reach 25% errors occurs at 30 degrees?
This toy model parallels the Bell experiment with entangled photons, so the numbers are the QM predictions for entangled photons. The mismatch rate changes as ##\sin^2(\alpha-\beta)##.
For more details you can take a look at this paper: https://arxiv.org/abs/quant-ph/0205171
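A quick numeric check of that formula (my own sketch) against the figures used in the SPOT argument above:

```python
# Sketch: QM mismatch rate sin^2(alpha - beta) for entangled photons,
# evaluated at the angles used in the SPOT argument.
import math

def mismatch(alpha_deg, beta_deg):
    theta = math.radians(alpha_deg - beta_deg)
    return math.sin(theta) ** 2

print(mismatch(30, 0))    # 0.25 -> one detector tilted +30 degrees
print(mismatch(0, -30))   # 0.25 -> the other tilted -30 degrees
print(mismatch(30, -30))  # 0.75 -> both tilted: 75%, exceeding the local bound of 50%
```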
 
  • #80
DrChinese said:
You may find this hard to accept, but not all theories featuring non-local elements are the same. Just saying a theory is non-local does not even come close to explaining Bell Inequality or Leggett Inequality violations. They might, but it really depends on the nature of the non-locality, don't you think?
Indeed I do. I believe the type of non-locality of BM (which does not allow my quikfone, #40) is different from Leggett's.
DrChinese said:
The point is: what are the parameters of your theory, and is it realistic in the sense of the referenced paper?
I think(?) I see what you mean. The existence of my quikfone doesn't provide a consistent theory to duplicate quantum correlations.
If Alice and Bob know the entangled state of the photons and know what settings to employ, then they can use the quikfone to mimic the quantum correlations. However, they in general do not have that info and thus cannot in general conspire to get the appropriate correlations.

Nevertheless, given any Bell type theorem whose conclusion can be violated by QM, I can produce a single algorithm, using the quikfone, that will violate all such theorems. It will not in general agree with a QM violation and will likely fall short of a consistent theory in other ways. It is realistic.
But in the case of GHZ the algorithm provides the same violation as QM (there is no leeway).
 
  • #81
"No physical theory of Local Hidden Variables can ever reproduce all of the predictions of Quantum Mechanics". In what essential way do the premisses/assumptions of Quantum Mechanics differ from those of Local Hidden Variable theories, in this case?
 
  • #82
jeremyfiennes said:
"No physical theory of Local Hidden Variables can ever reproduce all of the predictions of Quantum Mechanics". In what essential way do the premisses/assumptions of Quantum Mechanics differ from those of Local Hidden Variable theories, in this case?
When you say "this case", do you mean the toy model that you were using at the start of this thread, or do you mean a real Bell-type experiment done with pairs of entangled particles?
 
  • #83
Neither, but rather "the predictions of Quantum Mechanics" that the experimental results support.
 
  • #84
jeremyfiennes said:
"No physical theory of Local Hidden Variables can ever reproduce all of the predictions of Quantum Mechanics". In what essential way do the premisses/assumptions of Quantum Mechanics differ from those of Local Hidden Variable theories, in this case?
The predictions aren't "premises/assumptions", they are predicted experimental results. Quantum mechanics predicts that Bell's inequality (or related inequalities such as CHSH) will be violated under some conditions. All local hidden variable theories predict that these inequalities will not be violated.

Experiments show that the inequalities are violated, so they support the predictions of quantum mechanics. Give me a moment and I'll dig up a specific example.

Edit: here's an example: https://arxiv.org/pdf/1508.05949v1.pdf
Look at equation #1; quantum mechanics predicts that ##S## will take on values as large as ##2\sqrt{2}## while all local hidden variable theories predict that ##S## will never exceed 2.
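For concreteness, here is a small sketch (my own, assuming the standard spin-singlet correlation ##E(a,b) = -\cos(a-b)## and the usual optimal angles) evaluating the quantum value of ##S##:

```python
# Sketch: quantum CHSH value for the spin singlet, E(a, b) = -cos(a - b),
# at the standard optimal settings. LHV theories bound |S| by 2.
import math

def E(a_deg, b_deg):
    return -math.cos(math.radians(a_deg - b_deg))

a, a2, b, b2 = 0, 90, 45, 135
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S), 2 * math.sqrt(2))  # both ~2.828, violating the LHV bound of 2
```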
 
  • #85
jeremyfiennes said:
"No physical theory of Local Hidden Variables can ever reproduce all of the predictions of Quantum Mechanics". In what essential way do the premisses/assumptions of Quantum Mechanics differ from those of Local Hidden Variable theories, in this case?
The minimal formalism of Quantum Mechanics gives only statistical predictions. I would say that QM is a sophisticated phenomenological theory rather than a fundamental theory.
Any LHV theory, on the other hand, is supposed to be a fundamental theory.
 
  • #86
jeremyfiennes said:
"No physical theory of Local Hidden Variables can ever reproduce all of the predictions of Quantum Mechanics". In what essential way do the premisses/assumptions of Quantum Mechanics differ from those of Local Hidden Variable theories, in this case?

Here's the way I would put it: According to a local realistic theory, the outcome of any measurement depends only on conditions local to that measurement. So if Alice is measuring the spin of one particle, and far away, Bob is measuring the spin of another particle, then Alice's result depends only on facts about her particle (and measurement equipment, and maybe other things near Alice), and Bob's result depends only on facts about his particle (and measurement equipment, etc.). Alice's result does not depend on anything happening at Bob's location, and vice-versa.

"Depend" here is not in the sense of causality, but in the sense of prediction. In a locally realistic theory, knowing something about Bob shouldn't allow you to predict anything about Alice that isn't already captured by the local situation at Alice. A violation of local realism would be if knowing something about Bob allowed you to predict Alice's measurement result, but this information cannot be deduced from anything local to Alice. EPR violates local realism, because knowing Bob's result for a spin measurement of his particle allows you to predict Alice's result, and there is nothing in the region near Alice that would allow you to make this prediction.
 
  • #87
Thanks all three. I am with you on experimental violations. My present query is, however, theoretical. Bell formulated his theorem before it was possible to test it experimentally. The premisses/assumptions of LHV theories predict straight-line limits to the coincidence/angle relation, the inequalities, whereas QM predicts an S-curve that violates those limits. This has now been confirmed by experiment. My question is: what is the essential difference between the premisses/assumptions of LHV theories and those of QM, that leads to the latter predicting an S-curve rather than a straight-line coincidence/angle relation?
 
  • #88
jeremyfiennes said:
My question is: what is the essential difference between the premisses/assumptions of LHV theories and those of QM, that leads to the latter predicting an S-curve rather than a straight-line coincidence/angle relation?
Start with equation #1 in Bell's original paper: http://www.drchinese.com/David/Bell_Compact.pdf
This equation is stating an assumption that is common to all LHV theories: that the result at either detector depends on properties of the particle arriving at that detector (which may be correlated with various properties of the particle arriving at the other detector, because the two particles share a common origin) and the way that detector has been set up, but not on the way the other detector has been set up. Bell starts with that assumption and ends up with his inequality; and because all LHV theories share that assumption, all LHV theories must obey the inequality.

However, quantum mechanics predicts that the probability of getting a coincidence between the two detectors is a function of the angle between the two detectors: for example, in the photon polarization experiments the probability that one photon will pass and the other not pass is ##\cos^2\theta## where ##\theta## is the angle between the two detectors. Note that the state of both detectors goes into this calculation, so quantum mechanics is not making the assumption in equation #1. Furthermore, for some values of ##\theta## the ##\cos^2\theta## correlation violates the inequality; so not only does QM not require the #1 assumption, but also the #1 assumption cannot be consistent with QM.

So the key distinction between QM and the LHV theories that are precluded by Bell's theorem is the #1 assumption.
 
  • #89
jeremyfiennes said:
The premisses/assumptions of LHV theories predict straight-line limits to the coincidence/angle relation, the inequalities, whereas QM predicts an S-curve that violates those limits. This has now been confirmed by experiment. My question is: what is the essential difference between the premisses/assumptions of LHV theories and those of QM, that leads to the latter predicting an S-curve rather than a straight-line coincidence/angle relation?

This is one of many confusing points about Bell's theorem. He mentions in his discussion of the EPR experiment that one would expect a linear relationship between distant measurements in the case of a classical model, while QM predicts a nonlinear relationship. However,
  1. He doesn't (as far as I know) explain why the relationship should always be linear in the classical case.
  2. He doesn't actually use the linearity in the proof of his inequality, anyway.
Fact #2 means that you can just forget about linearity and you don't really miss anything. But his remark about linearity is a little mysterious.

You can prove linearity for a very specific toy hidden-variables model. The toy model is this:
  • Associated with each twin-pair of anti-correlated spin-1/2 particles is a unit vector ##\vec{\lambda}##.
  • The value of ##\vec{\lambda}## is a random unit vector, with equal probability density for pointing in any direction.
  • When Alice measures the component of the spin of one particle along axis ##\vec{\alpha}##, she gets ##+\frac{\hbar}{2}## if ##\vec{\lambda} \cdot \vec{\alpha} > 0##, and she gets ##-\frac{\hbar}{2}## if ##\vec{\lambda} \cdot \vec{\alpha} < 0##.
  • Bob measuring the component of spin of the other particle along axis ##\vec{\beta}## gets the opposite of Alice: ##-\frac{\hbar}{2}## if ##\vec{\lambda} \cdot \vec{\beta} > 0##, and ##+\frac{\hbar}{2}## if ##\vec{\lambda} \cdot \vec{\beta} < 0##.
You can prove, under these assumptions, that if the angle between Alice's axis ##\vec{\alpha}## and Bob's axis ##\vec{\beta}## is ##\theta##, then the strength of the anti-correlation decreases linearly with ##|\theta|## from a maximum at ##\theta = 0##.
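Here is a small Monte Carlo sketch of this toy model (my own code; the axis parametrization is an arbitrary choice), which reproduces the linear fall-off numerically:

```python
# Sketch of the toy model above: random unit vector lambda, outcome
# sign(lambda . alpha) for Alice and the opposite rule for Bob. The
# correlation falls off linearly with the angle theta.
import math
import random

random.seed(0)

def random_unit_vector():
    # Uniform direction on the sphere.
    z = random.uniform(-1, 1)
    phi = random.uniform(0, 2 * math.pi)
    r = math.sqrt(1 - z * z)
    return (r * math.cos(phi), r * math.sin(phi), z)

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def correlation(theta_deg, trials=100_000):
    t = math.radians(theta_deg)
    alpha = (0.0, 0.0, 1.0)                  # Alice's axis
    beta = (math.sin(t), 0.0, math.cos(t))   # Bob's axis, angle theta away
    total = 0
    for _ in range(trials):
        lam = random_unit_vector()
        a = 1 if dot(lam, alpha) > 0 else -1   # Alice's outcome (units of hbar/2)
        b = -1 if dot(lam, beta) > 0 else 1    # Bob's outcome: the opposite rule
        total += a * b
    return total / trials

for theta in (0, 45, 90, 135, 180):
    # Toy model gives E = -1 + theta/90 (linear in degrees);
    # QM instead gives E = -cos(theta).
    print(theta, round(correlation(theta), 3))
```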

I think that the linearity is more general, though. But I don't know the mathematical argument.
 
  • #90
jeremyfiennes said:
My question is: what is the essential difference between the premisses/assumptions of LHV theories and those of QM, that leads to the latter predicting an S-curve rather than a straight-line coincidence/angle relation?

Your question is actually backwards. Your question should be: what is the essential difference between the premises/assumptions of LHV theories and those of QM, that leads to the former predicting a straight-line rather than an S-curve coincidence/angle relation?

QM predicts the "S-curve" relationship due to specific theoretical considerations (which I will not go into). There is no specific LHV theory which predicts a the straight line relation because it has been known for over 200 years that is incorrect as compared to observation (Malus, ca. 1809). The reason the straight line relation is even brought up is that it would MINIMIZE the delta to the QM prediction (and observation), and it give the same answers at certain key settings. And it would in fact satisfy Bell. It is the simplest too. But it bears no connection to reality and would not even be discussed except in relation to Bell.
 
  • #91
DrChinese said:
Your question is actually backwards. Your question should be: what is the essential difference between the premises/assumptions of LHV theories and those of QM, that leads to the former predicting a straight-line rather than an S-curve coincidence/angle relation?

QM predicts the "S-curve" relationship due to specific theoretical considerations (which I will not go into). There is no specific LHV theory which predicts a the straight line relation because it has been known for over 200 years that is incorrect as compared to observation (Malus, ca. 1809).

I think that's a little bit misleading. Malus' equation is about sequential operations on a single beam of light---send it through a polarizing filter at this orientation, then send it through a filter at that orientation. But Bell's remarks about linear relationships are about correlations between distant measurements. It happens to be true that for the twin-photon EPR experiment, the correlations between measurements performed on correlated pairs of photons are described by Malus' equation as well, but that prediction was certainly not made 200 years ago. They didn't know how to produce entangled photon pairs 200 years ago, did they?
 
  • #92
I found an en.wikipedia quote that nicely sums up my doubt:
"All Bell inequalities describe experiments in which the predicted result assuming entanglement differs from that following from local realism."
What exactly does "assuming entanglement" here involve, in everyday terms?
 
  • #93
jeremyfiennes said:
I found an en.wikipedia quote that sums up nicely my doubt:
"All Bell inequalities describe experiments in which the predicted result assuming entanglement differs from that following from local realism."
What exactly does "assuming entanglement" here involve, in everyday terms?
To be sure, you'd have to ask the author of that quote (although it appears elsewhere on the internet, so there is some possibility that whoever added it to wikipedia was copying and pasting without complete understanding).

However, it seems likely that they're trying to say that the situations in which the quantum mechanical predictions will be different from the predictions of a theory that agrees with equation #1 in Bell's paper (which is to say, any LHV theory) will be the situations that involve entanglement. Thus, any experiment that will go one way if QM is right and another way if there is a valid LHV theory will involve entanglement.
 
  • #94
jeremyfiennes said:
What exactly does "assuming entanglement" here involve, in everyday terms?

I would adopt Nugatory's interpretation here with the proviso that, strictly speaking, if one is only interested in violating the inequality then entanglement is not actually necessary.

That seems like it runs counter to accepted wisdom, but I believe it's important to understand because it highlights the essential features of QM from which the possibility of violation emerges.

If we look at the maths of Bell's proof there's a very critical step which is the locality assumption. In the maths it's the bit where ##P(A| \alpha , \beta , \lambda )## gets written as ##P(A| \alpha , \lambda )##. Here ##\alpha## and ##\beta## are the settings of the detectors, ##A## is the result at Alice's detector and ##\lambda## stands for the hidden variables. So we're making the assumption that the probability of getting a certain result at Alice, conditioned upon the device settings and the hidden variables, does not depend upon the setting of the remote device.

There's no requirement that the devices of Alice and Bob are spacelike separated - it's irrelevant for the proof of the inequality. The ansatz that probabilities of results 'here' are not affected by settings 'there', the locality assumption, is assumed to hold whether or not the devices are spacelike separated.

Now it's possible that there is some unknown, and strange, mechanism that allows the device 'here' to know about the settings 'there' - some unknown field that carries the information about remote settings whatever experiment we set up and for whatever measurement device. In this case we couldn't make our ansatz because the existence of something like this field would affect the probabilities.

The importance of the spacelike separation step is to force any information about remote settings to have to be transmitted FTL. Now it becomes a very big deal. Before this step we could, conceivably, have some hitherto unknown weird and wonderful physics going on that allows the probabilities to be affected. With this spacelike separation step this hypothesized new physics would have to violate the principles of relativity.

So what about entanglement? Well if we ditch the requirement for spacelike separated measurements then it's possible to observe a Bell inequality violation with single, non-entangled, particles. The violation occurs in this instance between the preparation statistics of Alice and the measurement statistics of Bob. I won't go into the details but suffice it to say that it's possible. What this is telling us is that the violation of the mathematical inequality is not dependent on the devices being spacelike separated (which we already knew from the maths anyway). Furthermore, it's telling us that in this case we can obtain violations even without entangled particles. So something about QM allows this violation even without considerations of entanglement.

The spacelike separation - a very critical step if you want to rule out local hidden variable theories - is the icing on the cake - but it's not the essential reason why we see a violation of the math inequality. Nor is entanglement, per se.

If you want to see violation for spacelike separated measurements, then you need entanglement.
 
  • #95
stevendaryl said:
I think that's a little bit misleading. Malus' equation is about sequential operations on a single beam of light---send it through a polarizing filter at this orientation, then send it through a filter at that orientation. But Bell's remarks about linear relationships are about correlations between distant measurements. It happens to be true that for the twin-photon EPR experiment, the correlations between measurements performed on correlated pairs of photons are described by Malus' equation as well, but that prediction was certainly not made 200 years ago. They didn't know how to produce entangled photon pairs 200 years ago, did they?

Yes, all true. But what I said was not misleading, as there was never a point in time (certainly after 1809) at which the polarization we are talking about was considered "straight-line". The starting point for entanglement (I think electron entanglement was first) was always a cos function of some type, so it was probably not until the 1940s, perhaps, that it was specifically considered.
 
  • #96
Thanks all. Thinking-cap time needed.
 