A question about Bell's Inequality and hidden variables

  • #51
rede96 said:
... may also be influenced in a different way by the magnetic field ...

As PeterDonis pointed out: it's all wrapped up into one, a single outcome. Remember: you get perfect correlations at the same angle, so the measurement device doesn't add a degree of randomness separately (as compared to the other measuring device). If they add randomness, it is the same randomness.
 
  • #52
PeterDonis said:
No, you can't. "The actual process of measurement has an influence over the result" is a mechanism, and, as you and I both agree per my previous post, Bell's Theorem has nothing to do with mechanisms. It's just about correlations between outcomes. That's it.

DrChinese said:
As PeterDonis pointed out: it's all wrapped up into one, a single outcome. Remember: you get perfect correlations at the same angle, so the measurement device doesn't add a degree of randomness separately (as compared to the other measuring device). If they add randomness, it is the same randomness.

When I mentioned measurement I wasn't referring to measurement errors or differences between various apparatus. I meant that the actual process of a photon going through a polariser, or an electron going through a magnetic field, actually changes the particle in some way so as to produce the result. And it could be different for each particle.

If this were the case, then you could model almost any correlations you want just by playing with the variables / probabilities. So I can artificially model 0 deg, 120 deg and 240 deg and produce the expected matches seen in experiment. I'm not sure it means anything in the real world, but I have a few Excel spreadsheets where I've done this. But I haven't been able to figure out how to do it for all combinations of angles without making a new set of assumptions each time.

So it's still nonsense, I know, but it did make me wonder if in theory it could be a possibility.
 
  • #53
rede96 said:
And it could be different for each particle.

But it couldn't! Else how would the results match exactly at the SAME angle settings??

You see, that's the thing about replicating the quantum predictions using a local realistic hypothesis: to match EPR, Alice's result must lead to a perfect prediction of Bob's (on the same measurement basis).

Of course, your idea falls short even without the EPR-like side, but you will never see that as long as you skip the step of actually attempting to match the quantum expectation values by writing out a full data set for a handful of examples. If you did that, your questions would end quickly. :smile:
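For anyone who wants to try that exercise, here is a minimal sketch in Python. It assumes the standard three-angle photon setup at 0°, 120° and 240° and perfect correlation at equal angles, so each pair carries one shared "instruction set" of predetermined ±1 outcomes:

```python
from fractions import Fraction
from itertools import combinations, product

# With settings 0, 120 and 240 degrees, a local "instruction set" is a
# predetermined +1/-1 outcome for each of the three angles; perfect
# correlation at equal angles forces both photons to carry the same set.
match_rates = [
    Fraction(sum(s[i] == s[j] for i, j in combinations(range(3), 2)), 3)
    for s in product((+1, -1), repeat=3)
]
worst = min(match_rates)
print(worst)  # 1/3: the lowest match rate any instruction set allows
```

Since the worst case over the eight instruction sets is 1/3, any probabilistic mixture of them predicts a match rate of at least 1/3 when the two angles differ, whereas QM predicts ##\cos^2(120°) = 1/4##.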
 
  • #54
rede96 said:
I meant that the actual process of a photon going through a polariser, or an electron going through a magnetic field, actually changes the particle in some way so as to produce the result. And it could be different for each particle.

It's still a mechanism, and Bell's Theorem has nothing to do with mechanisms.

rede96 said:
If this was the case, then you can model almost any correlations you want just by playing with the variables / probabilities.

But if the correlations produced by your model factorize in the way I described, then they must obey the Bell inequalities. Or, conversely, if the correlations produced by your model violate the Bell inequalities, then they can't factorize in the way I described. Again, that has nothing whatever to do with any mechanism; it's just a mathematical fact about the correlations between the outcomes, regardless of what mechanism produces them.

That means, if you want to understand what Bell's Theorem is telling you, you should stop trying to make claims about mechanisms and work on trying to understand what the factorization condition means.
 
  • #55
rede96 said:
When I mentioned measurement I wasn't referring to measurement errors or differences between various apparatus. I meant that the actual process of a photon going through a polariser, or an electron going through a magnetic field, actually changes the particle in some way so as to produce the result. And it could be different for each particle.

If this were the case, then you could model almost any correlations you want just by playing with the variables / probabilities. So I can artificially model 0 deg, 120 deg and 240 deg and produce the expected matches seen in experiment. I'm not sure it means anything in the real world, but I have a few Excel spreadsheets where I've done this. But I haven't been able to figure out how to do it for all combinations of angles without making a new set of assumptions each time.

So it's still nonsense, I know, but it did make me wonder if in theory it could be a possibility.

No, you cannot. Bell explicitly considers "hidden variables" associated with the measuring apparatus as well, and the possibility that the measuring apparatus interacts with the quantum system. As I said, the simpler derivations may omit this consideration, but the full, most sophisticated versions come up with the same result, which is why the simpler version is usually the one given.

It is also possible to do the full derivation (i.e. include measurement effects and hidden variables associated with the measuring apparatus) in just a few lines if one uses the graphical notation in Fig. 19 of https://arxiv.org/abs/1208.4119.

There are loopholes, but the idea you raise is not one of them. There is a quite comprehensive discussion of loopholes in https://arxiv.org/abs/1303.2849.
 
  • #56
PeterDonis said:
It's still a mechanism, and Bell's Theorem has nothing to do with mechanisms.
But if the correlations produced by your model factorize in the way I described, then they must obey the Bell inequalities. Or, conversely, if the correlations produced by your model violate the Bell inequalities, then they can't factorize in the way I described. Again, that has nothing whatever to do with any mechanism; it's just a mathematical fact about the correlations between the outcomes, regardless of what mechanism produces them.

That means, if you want to understand what Bell's Theorem is telling you, you should stop trying to make claims about mechanisms and work on trying to understand what the factorization condition means.

Could you show/elaborate on the factorization condition, and how correlations predicted by QM and outcomes violating the inequality cannot factorize:
##p(ab|xy,\lambda) = p(a|x,\lambda)\, p(b|y,\lambda)##
##S = (ab) + (ab') + (a'b) - (a'b') \le 2##
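For concreteness, the quantum value of that ##S## can be evaluated directly. The sketch below assumes polarization-entangled photons, where the QM correlation is ##E(x, y) = \cos 2(x - y)##, and uses one standard choice of detector angles (the angle values are an illustration, not from any post above):

```python
from math import cos, isclose, radians, sqrt

def E(x, y):
    # QM correlation for polarization-entangled photons,
    # settings x and y in degrees
    return cos(2 * radians(x - y))

a, a2, b, b2 = 0.0, 45.0, 22.5, -22.5   # Alice's and Bob's two settings
S = E(a, b) + E(a, b2) + E(a2, b) - E(a2, b2)
print(S)  # 2*sqrt(2) ~ 2.828, above the local bound of 2
```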
 
  • #57
morrobay said:
Could you show/elaborate on the factorization condition and how correlations predicted by QM and outcomes violating the inequality cannot factorize

Any paper giving a proof of the Bell inequalities or their equivalents (such as the CHSH inequalities, which it looks like you are quoting) will show this.
 
  • #58
DrChinese said:
Of course, your idea falls short even without the EPR-like side, but you will never see that as long as you skip the step of actually attempting to match the quantum expectation values by writing out a full data set for a handful of examples. If you did that, your questions would end quickly.

If what you are advising is to write down all the permutations (e.g. ++-, +-+, etc.) and see that the match rate always has to be >= 0.333, I get that. My issue is that I don't fully understand how a violation of this leads to non locality.

PeterDonis said:
That means, if you want to understand what Bell's Theorem is telling you, you should stop trying to make claims about mechanisms and work on trying to understand what the factorization condition means.

I guess that's the issue: I don't understand what the factorization condition means at all, or more importantly how it relates to reality, other than that it predicts a result we don't get in experiment. Hence why I focus on the mechanisms to help my understanding. But that doesn't mean I dismiss the math. It just means I don't understand how it relates to the real world.

So do you mind if I ask a question to help with my understanding? Hypothetically speaking, imagine there was a particle that, when tested individually through a non-uniform magnetic field, always gave a probability of 50% for 'up' and 50% for 'down'. (EDIT: So for all intents and purposes it acted just like any spin 1/2 particle.)

But when the particle was entangled, it lost its probability, and one of them always pointed up and the other always pointed down, regardless of the angle it was tested at. (EDIT: e.g. they were now only attracted to the north or south pole of the magnet.)

So if I did the Bell test with 3 randomly selected angles, it would still be a 50% chance of detection at each angle, as I don't know which path the particles would take.

I could model this in the same way as the Bell test, and it would violate Bell's inequality, as I'd get no matches where we'd expect >= 0.333. But does that also mean non locality? Couldn't I just say that one particle always pointed up in a magnetic field and one always pointed down, so this was an intrinsic property of the particle we can explain without action at a distance? Or would Bell's theorem not apply in that case?
 
  • #59
Hi All,
sorry for my wording, English is not my first language.

Bell's theorem has many possible loopholes. The most serious of them is the fair-sampling loophole. As Garg & Mermin showed in 1986, one can reproduce the QM results by assuming that some 'particles' are not detected. This implies that a strict experimental test of the theorem needs a detection rate of at least 75%. Another approach would be to invalidate the detection patterns predicted by the alternative hidden-variable theories. Then Zeilinger's group demonstrated the inequality violations with a detection rate of 92% (2013/15), and quantum entanglement became a definitive mainstream concept. It is now an inescapable element in the search for new physics (LQG, ER=EPR, etc.) and in philosophy.

Mermin-type functions exist. Some need hidden variables; others need only a single shared value when the simulation starts. One of them, discovered in 2011, uses trivial classical tunnelling and needs no fine-tuning at all. The cos² correlated (or sin² uncorrelated) statistics are reproduced perfectly with a computed and simulated detection rate of 75%. This simulates exactly what the detectors would measure locally in a 75%-efficiency experiment. A physical interpretation might use the concept of interference and its relation to usable (local) information. But to get a perfect cos², the rate is limited to 75%; thus it is falsified in advance by the 92% experiment.

The Garg & Mermin paper (Anupam Garg and N. D. Mermin, "Detector inefficiencies in the Einstein-Podolsky-Rosen experiment", Physical Review D 35, No. 12, 15 June 1987) is not publicly available, which reduces its real impact.

As of today, numerical simulations can produce outcomes close to QM, while experiments probably run at a detection rate of around 1 or 2%. This raises the question of the applicability of the mainstream theory to technology.
Some note that Zeilinger's experiment is debatable. Others remark that tap-detecting quantum cryptography is not yet sold by companies. Sometimes I wonder whether the quantum computing projects will end up producing nice specialized analog computers.

But it is impossible that all the well-known physicists are mistaken. Hopefully a technological product will close the debate.
 
  • #60
rede96 said:
My issue is that I don't fully understand how a violation of this leads to non locality.

"Non locality" is words, and not everyone necessarily agrees that they are the best words to use to describe what violating the Bell inequalities means. But see below.

rede96 said:
I don't understand what the factorization condition means

Heuristically, it goes like this: we have some probability, correlation, whatever you want to call it, between two measurement results, call them ##a## and ##b##, which are obtained at spacelike separated events. Call the thingie we're interested in (probability, correlation, whatever) ##E(a, b)##. We have settings for the two measuring devices, call them ##A## and ##B##. So the most general function we can write down to describe how the thingie we're interested in depends on the measurement settings (ignoring any other stuff like "hidden variables"--in all the papers those are integrated over anyway so you always end up with formulas that look like the ones I'm about to write) is ##E(a, b) = F(A, B)##, i.e., the thingie we're interested in is described by some function ##F## whose arguments are the settings ##A## and ##B##.

Now, what the factorization condition means is that the function ##F## can be factored into a function that only depends on ##A## and a function that only depends on ##B##, like this: ##E(a, b) = f(A) g(B)##. And intuitively, this seems to express what we mean by "locality": that since the measurements are spacelike separated, there can't be any communication between them, so whatever is causing the thingie we're interested in, it should break down into something that only depends on the settings at ##A## (which is what the function ##f(A)## describes) and something that only depends on the settings at ##B## (which is what the function ##g(B)## describes). And Bell and others have shown mathematically that assuming that form for ##E(a, b)##--what I'm calling the factorization condition--requires that certain inequalities must hold, which we observe to be violated in actual experiments. So, whatever is causing the thingie we're interested in, it cannot break down into something that only depends on the settings at ##A##, and something that only depends on the settings at ##B##.
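A quick numerical illustration of how constraining that factorized form is (a sketch, not a proof: the values `fA, fA2, gB, gB2` are just random numbers in [-1, 1] standing in for the functions ##f## and ##g## evaluated at the two settings on each side):

```python
import random

random.seed(0)
worst = 0.0
for _ in range(100_000):
    # a random factorized model: f depends only on A's setting,
    # g only on B's setting, each bounded by 1 in absolute value
    fA, fA2 = (random.uniform(-1, 1) for _ in range(2))
    gB, gB2 = (random.uniform(-1, 1) for _ in range(2))
    S = fA * gB + fA * gB2 + fA2 * gB - fA2 * gB2
    worst = max(worst, abs(S))
print(worst)  # never exceeds 2, the CHSH bound for factorized correlations
```

The reason is simple algebra: ##S = f(A)\,[g(B) + g(B')] + f(A')\,[g(B) - g(B')]##, and with all values bounded by 1 this can never exceed 2, while the measured quantum value reaches about 2.83.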
 
  • #61
rede96 said:
Hypothetically speaking imagine...

You're getting sidetracked on mechanism again. But testing your mechanism is simple: just figure out whether the function ##E(a, b) = F(A, B)## that describes the results it produces can be factored the way I described in my last post, or not.

rede96 said:
But when the particle was entangled, it lost its probability, and one of them always pointed up and the other always pointed down, regardless of the angle it was tested at

You're not describing a particle that "lost its probability"--you're just describing the standard singlet state of two qubits, whose spins are guaranteed to always be opposite if they are both tested with the same angle. And yes, this state can (obviously) produce results that violate the Bell inequalities. Which means there is no way to write down a function that describes how the results depend on the measurement settings that will factorize in the way I described in my previous post.

rede96 said:
So if I did the Bell test with 3 randomly selected angles, it would still be a 50% chance of detection at each angle

A 50-50 chance for each one considered individually, yes. But you would always find that, if the angles selected were the same for both particles, their results would be opposite.

What you have not considered is what happens when the two angles are different. You will still get a 50-50 chance of each outcome for each particle considered individually, but now the correlation between the results for the two particles will be different--they won't always be opposite. But if you do a lot of trials at lots of different combinations of angles, you will find that the whole set of results is described by some function ##E(a, b)## that cannot be factorized in the way I described.
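The pattern described above can be generated directly from the singlet statistics (note: this samples the quantum joint distribution, so it is a simulation of the predictions, not a local model; the function name `singlet_pair` is just for illustration):

```python
import random
from math import cos, radians

random.seed(1)

def singlet_pair(a_deg, b_deg):
    """Sample one (A, B) outcome pair with the singlet statistics:
    P(opposite results) = cos^2((a - b) / 2)."""
    A = random.choice((+1, -1))                  # each side is 50/50 on its own
    half = radians(a_deg - b_deg) / 2
    B = -A if random.random() < cos(half) ** 2 else A
    return A, B

# Same angle: the results are always opposite.
assert all(A == -B for A, B in (singlet_pair(30, 30) for _ in range(1000)))

# Different angles: each side is still 50/50, but the anticorrelation weakens.
pairs = [singlet_pair(0, 90) for _ in range(100_000)]
corr = sum(A * B for A, B in pairs) / len(pairs)
print(corr)  # close to -cos(90 deg) = 0
```

Running this over many angle combinations reproduces ##E(a, b) = -\cos(a - b)##, and that is the function which, per the posts above, cannot be factorized.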
 
  • #62
Leo1233783 said:
...Bell's theorem has many possible loopholes. The most serious of them is the fair-sampling loophole. ... As of today, numerical simulations can produce outcomes close to QM, while experiments probably run at a detection rate of around 1 or 2%.

Loopholes are a topic for another thread. :smile:

But a number of Bell tests have been done without detection issues requiring the fair sampling assumption. Here's a great one:

Experimental loophole-free violation of a Bell inequality using entangled electron spins separated by 1.3 km
https://arxiv.org/abs/1508.05949
 
  • #63
DrChinese said:
But a number of Bell tests have been done without detection issues requiring the fair sampling assumption.
I aim to work on experimental raw data, possibly with external referees, like this one: Significant-loophole-free test of Bell's theorem with entangled photons, and many others. We are talking about science. Experiments must be rigorous and reproducible. That is the basis of any scientific education.

I have no opinion myself; I fluctuate between interpretations three times in the same hour.
I tried to keep the post short. If you need details on anything above, please ask.
 
  • #64
PeterDonis said:
Heuristically, it goes like this: we have some probability, correlation...

Thank you for explaining that in more detail.

PeterDonis said:
So, whatever is causing the thingie we're interested in, it cannot break down into something that only depends on the settings at A, and something that only depends on the settings at B

So if I've understood all this correctly, then what this is telling me is that there is no information a particle could carry that would lead to a violation of Bell's inequality, because any information it held would just lead to a different outcome at A and at B independently. And no combination of the settings at A and the settings at B could lead to a violation of Bell's inequality, due to the way the probabilities factorise.

Therefore what is causing the thingie we are interested in can only happen after the measurement. And we can separate A and B far enough that no information could pass between them in the time taken to measure them both. Hence non locality.

Is that correct?

PeterDonis said:
And yes, this state can (obviously) produce results that violate the Bell inequalities. Which means there is no way to write down a function that describes how the results depend on the measurement settings that will factorize in the way I described in my previous post.

So here's the crux of my confusion. In my hypothetical, assuming a Bell test had not been done, the probabilities we would assume for the Bell test would be exactly the same as the probabilities that are actually used, as the individual particles act in exactly the same way as, for example, electrons. Also, whenever we measure the entangled states at the same angles we always see anti-correlation, which again is what we see with electrons. (EDIT: Hence why we'd assume the same probabilities.)

It would only be when we ran the Bell test that we would discover that the probabilities don't factorise to depend on just the settings at A and the settings at B. BUT we would still have a violation of Bell's inequality that could be explained classically.

So back to the real world: when we do get a violation of Bell's inequality, how do we know that there is not something weird going on with entanglement, such that the probabilities we assume for entangled states measured at different angles are not correct? As in my imagined scenario.

EDIT: In other words, how do we know that entangled particles aren't just correlated within a certain range of angles relative to the north / south poles of the magnetic field, but then become dependent on the angle setting at angles outside that range? If that makes sense?
 
  • #65
rede96 said:
1. BUT we would still have a violation of Bell's inequality that could be explained classically.

2. So back to the real world: when we do get a violation of Bell's inequality, how do we know that there is not something weird going on with entanglement, such that the probabilities we assume for entangled states measured at different angles are not correct? As in my imagined scenario.

EDIT: In other words, how do we know that entangled particles aren't just correlated within a certain range of angles relative to the north / south poles of the magnetic field, but then become dependent on the angle setting at angles outside that range? If that makes sense?

1. There are NO classical datasets that reproduce the quantum expectation values. As I have mentioned, you can hand pick them, and it still won't work. Just try. :smile:

2. Obviously, you can run a correlation test on entangled pairs at any 2 desired angles and verify that the quantum predictions are correct. At what angles do you think the quantum predictions (##\cos^2 \theta##) are not correct? It should be obvious that this has been tested up one side and down the other. There may have been tens of thousands of Bell tests run by now supporting the predicted quantum correlations.

Everything operates well in line with the predictions of QM, and those are clearly incompatible with local realism. Spin and polarization testing is but one area where this manifests itself. Local realism has probably been falsified in over a thousand DIFFERENT kinds of tests as well. There have been no published reports I am aware of that indicate a FAILURE to falsify local realism - which would be big news in itself if it could be replicated.
 
  • #66
DrChinese said:
1. There are NO classical datasets that reproduce the quantum expectation values. As I have mentioned, you can hand pick them, and it still won't work. Just try. :smile:

Well, for what it's worth, I can artificially create a data set that reproduces the quantum expectation values based on classical thinking. I just make some wild assumptions about how entangled particles work.

In essence, I randomly generate an angle theta to represent the angle of the component of spin (Z direction) for both entangled particles. Then each detector selects one of three agreed angles randomly (as per a normal Bell test). If theta is within a certain (small) range of the detector angle, then its detection is always up, so I can ensure particles measured at the same angle are always correlated. There is another range of angles where detection is probabilistic, so I get a certain number of matches just by chance at different angles. Then there is a further range where, if the angle of the component of spin falls outside that range relative to the detector, then because of some imagined property of the spin states one particle will always be attracted to the north pole of the magnetic field and one to the south pole, hence artificially reducing the number of matches.

By messing around with the probabilities and ranges I can artificially create the right number of matches and still have both particles exactly correlated when measured at the same angle. Like I said, I'm not sure what it means, but I can do it that way.
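One way to see why no tuning of the ranges can work for all angle pairs: once the hidden angle theta is fixed, any local rule of this kind assigns each particle a definite ±1 outcome for each detector setting, and for every such assignment the CHSH combination ##(ab) + (ab') + (a'b) - (a'b')## is exactly ±2. A small check, independent of the details of any particular model:

```python
from itertools import product

# a, a2 are Alice's predetermined outcomes for her two settings,
# b, b2 are Bob's; a local hidden-variable model is just a probability
# mixture over these 16 assignments (the hidden variable picks one).
values = {a * b + a * b2 + a2 * b - a2 * b2
          for a, a2, b, b2 in product((+1, -1), repeat=4)}
print(sorted(values))  # [-2, 2]: every assignment gives exactly +/-2
```

Averaging over any distribution of hidden angles therefore can never push ##S## above 2, no matter how the ranges and probabilities are tuned, while the quantum prediction reaches ##2\sqrt{2}##.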
 
  • #67
rede96 said:
Is that correct?

It's too vague to be either correct or incorrect. What is "information"? What does it mean for a particle to "carry" information?

You need to stop thinking in vague ordinary language terms. Bell formulated his theorem using math for a reason.

rede96 said:
In my hypothetical, assuming a Bell test had not been done, the probabilities we would assume for the Bell test would be exactly the same as the probabilities that are actually used.

What does this even mean?

rede96 said:
It would only be when we ran the Bell test that we would discover that the probabilities don't factorise to depend on just the settings at A and the settings at B. BUT we would still have a violation of Bell's inequality that could be explained classically.

What does "classically" mean? If you are referring to the hypothetical model of yours that you keep talking about, it's too vague for me to tell whether "classically" is an appropriate adjective to describe it.

rede96 said:
when we do get a violation of Bell's inequality, how do we know that there is not something weird going on with entanglement, such that the probabilities we assume for entangled states measured at different angles are not correct?

We don't have to assume any probabilities in order to measure violations of the Bell inequalities. We measure the probabilities. We don't assume them.

You seem to have things backwards. We don't assume something about the probabilities, and then try to decide whether violations of the Bell inequalities are consistent with our assumptions. The reasoning goes like this:

When we do experiments, we find that if we take the correlations between the results of measurements on pairs of spacelike separated particles, which have been prepared in a particular way (the way that quantum mechanics calls "entangled"), those correlations violate the Bell inequalities.

Bell's Theorem says that, if the correlations violate the Bell inequalities, then no function that factorizes in the way I described can produce those correlations. (The theorem is usually stated in the contrapositive form to this, but logically the two versions are equivalent, and the version I've just stated is more relevant to this discussion.)

Therefore, whatever-it-is that is producing the correlations cannot be described by a function that factorizes in the way I described.

The reasoning above assumes nothing about the probabilities; those are measured. It assumes nothing about the "state" of the particles, or about whatever "mechanism" might or might not be at work behind the scenes to produce the observed results. The preparation process that makes the particles is an objective process, which can be replicated without making any assumptions about what it is doing other than what is directly observed. So are the measurement processes.

What you appear to be trying to do is to construct some mechanism that will produce correlations that violate the Bell inequalities, but which somehow isn't "nonlocal", by whatever vague definition of that term you are using. But, as I said above, Bell formulated his theorem using math for a reason. "Nonlocal" is a vague ordinary language term. But whether or not the function that describes the correlations factorizes is a precise mathematical question that has a precise mathematical answer.

rede96 said:
for what it's worth I can artificially create a data set that reproduces the quantum expectation values based on classical thinking.

"Classical thinking" is a vague ordinary language term. To put it bluntly, say this statement of yours quoted just above is correct. Who cares?

What you cannot do is create a data set that reproduces the experimentally measured results (which are consistent with the QM prediction), and describe it using a function that factorizes in the way I described. It's mathematically impossible: that's what Bell proved. And with that, this discussion has gone around in circles long enough. Thread closed.
 