Undergrad Looking For Help Understanding Bell's Theorem, Hidden Variables & QM

Thread summary:
Bell's Theorem explores the implications of hidden variables in quantum mechanics, particularly through light polarization experiments that test Bell's inequality. The discussion emphasizes that if hidden variables exist, they must provide simultaneous real values for multiple polarization angles, which leads to predictions contradicting those of quantum mechanics. It highlights that experiments consistently support quantum mechanics, showing correlations that align with its predictions rather than those of hidden variable theories. The conversation also touches on the statistical analysis of experimental observations and the "fair sampling loophole", which has been addressed in recent experiments. Overall, the thread seeks to clarify the foundational assumptions of Bell's Theorem and the nature of hidden variables in quantum mechanics.
  • #31
@DrChinese, just to try and clarify further.

My understanding is that your statement that we don't know how often cases [1] & [8] occur either implies, or at least allows for, the possibility that they occur infrequently, which is why I am thinking we can discount them when calculating the minimum probability, but not exclude them from the possibility space.
 
  • #32
Lynch101 said:
Dr.C, for want of a better word, ignored the cases where each photon had hidden variables which would ensure it passed all 3 filters or the opposite, ensured it would fail all 3.
I don't know what you're talking about. The table @DrChinese gave covers all possible combinations of results. No combinations are ignored. The case where all 3 filters are passed and the case where all 3 filters fail is included in the table.

It doesn't matter what other structure there is "underneath" that produces the results. The point is that no possible table of predetermined results can match the QM prediction.

Furthermore, this is true whether we ignore the "pass all 3" and "fail all 3" possibilities or not. Either way no possible table of results can match the QM prediction.

I think you have not grasped the actual argument @DrChinese has been describing.
 
  • #33
Lynch101 said:
If we discount them (but allow for them to occur)
What does this even mean?
 
  • #34
PeterDonis said:
I don't know what you're talking about. The table @DrChinese gave covers all possible combinations of results. No combinations are ignored. The case where all 3 filters are passed and the case where all 3 filters fail is included in the table.
I can see where the average of 0.333 comes from, but it doesn't come from the average of the entire possibility space. While cases [1] and [8] are represented in the table, when it comes to calculating the average, they are ignored (which is what I mean when I say discounted). The reason given for this is because we don't know how often they occur.

Instead, the average for each row is taken, except for cases [1] and [8].
[Image: table of the 8 hidden-variable cases with per-row match averages, cases [1] and [8] excluded]


But, when I was asking about the case where we consider each column:
[Image: the same table, averaged by column instead of by row]


you immediately saw that this didn't treat the choice of filters as statistically independent of the particle pairs and therefore implied superdeterminism. I didn't see it myself initially, but after thinking about it, I think I understand how you arrived at that conclusion.

That prompted me to wonder why taking the average of an individual row wasn't the same principle, just applied to particle pairs.

I was thinking that, for statistically independent events, we should consider the possibility space as a whole, as in this representation - where there are 24 possible outcomes:

[Image: table of the full possibility space, showing 24 possible outcomes]


This got me wondering, if we don't know how often cases [1] and [8] occur, it might be possible that they occur much less frequently than the other cases. It might be the case that in a given statistical sample they don't occur at all, while in other statistical samples they do occur.

If that were the case, the space of possible outcomes would still be 24 but the minimum number of matches we would expect to see would be 6/24 = 0.25. That is arrived at when we consider the other cases, which occur more frequently.

I was thinking that when we factor in cases [1] and [8] we would expect ≥0.25 matches.
PeterDonis said:
It doesn't matter what other structure there is "underneath" that produces the results. The point is that no possible table of predetermined results can match the QM prediction.

Furthermore, this is true whether we ignore the "pass all 3" and "fail all 3" possibilities or not. Either way no possible table of results can match the QM prediction.
My thinking is that, if cases [1] & [8] ("pass all 3" and "fail all 3") occur much less frequently than the other cases, i.e. there is not an equal chance of them occurring, then they would remain in the space of possibilities but (borrowing DrC's words) "you never get a rate less than" 0.25.
PeterDonis said:
I think you have not grasped the actual argument @DrChinese has been describing.
I can see where the average of 0.333 comes from. My questions might be more to do with the application and interpretation of the statistics in general.
 
  • #35
PeterDonis said:
What does this even mean?
What I mean by that is, it's always a possibility that cases [1] & [8] occur, so we have to include them in our space of possible outcomes. However, because they are not as likely to occur as the other cases (with a possibility they don't occur in a given statistical sample), we don't count them when calculating the minimum expectation value (of matching outcomes).

Where they occur, they could give us a value ≥0.25.
 
  • #36
Lynch101 said:
@DrChinese, just to try and clarify further.

My understanding is that your statement that we don't know how often cases [1] & [8] occur either implies, or at least allows for, the possibility that they occur infrequently, which is why I am thinking we can discount them when calculating the minimum probability, but not exclude them from the possibility space.

Lynch101 said:
What I mean by that is, it's always a possibility that cases [1] & [8] occur, so we have to include them in our space of possible outcomes. However, because they are not as likely to occur as the other cases (with a possibility they don't occur in a given statistical sample), we don't count them when calculating the minimum expectation value (of matching outcomes).

Where they occur, they could give us a value ≥0.25.

Here are some relevant rules for calculating any statistical situations:

a) If you include the cases in the denominator, you must also include the associated data in the numerator.
b) You are free to change the weighting of the cases according to your own viewpoint or rationale. For example, while I might suggest all 8 cases are equally likely, you might reasonably argue that cases [1] and [8] are rare - so you weight them as 0%. You might even say case [2] is twice as likely as case [3]. Fine. However, that still leads us to exactly the same minimum of .333 - and not .250. There are no cases with a match % less than .333.

Next, consider this: you can even make up your own results for as many trials as you like, picking whatever cases you like. And unless you pick those cases knowing in advance which 2 measurement settings (of 3) you plan to use, you will never have an average less than .333.
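This weighting claim can be brute-forced in a few lines. A sketch, assuming the standard three-setting table discussed in this thread, where a "match" means the two randomly chosen settings (of A, B, C) give the same predetermined result:

```python
import itertools
import random

# The 8 possible predetermined result sets (+/-) for settings A, B, C.
cases = list(itertools.product([+1, -1], repeat=3))

def match_rate(case):
    # Average over the 3 ways of picking 2 of the 3 settings;
    # a "match" means both chosen settings give the same result.
    pairs = [(0, 1), (0, 2), (1, 2)]
    return sum(case[i] == case[j] for i, j in pairs) / 3

# Cases [1] (+++) and [8] (---) match on every pair of settings, so
# their match rate is 1; every mixed case has match rate exactly 1/3.
assert match_rate((+1, +1, +1)) == 1.0
assert abs(match_rate((+1, +1, -1)) - 1 / 3) < 1e-12

# Try many arbitrary weightings of the 8 cases: the weighted average
# never drops below 1/3, whatever weight rows [1] and [8] get.
rng = random.Random(0)
for _ in range(10_000):
    weights = [rng.random() for _ in cases]
    avg = sum(w * match_rate(c) for w, c in zip(weights, cases)) / sum(weights)
    assert avg >= 1 / 3 - 1e-9

print("minimum match rate over all cases:", min(match_rate(c) for c in cases))
```

Since no row has a match rate below 1/3, no weighting of rows can produce an average below 1/3, which is the point being made above.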
 
  • #37
Lynch101 said:
I can see where the average of 0.333 comes from, but it doesn't come from the average of the entire possibility space.
The average if you include rows 1 and 8 is even higher.

Lynch101 said:
What I mean by that is, it's always a possibility that cases [1] & [8] occur, so we have to include them in our space of possible outcomes.
Ok.

Lynch101 said:
However, because they are not as likely to occur as the other cases (with a possibility they don't occur in a given statistical sample), we don't count them when calculating the minimum expectation value (of matching outcomes).
Nonsense. Doing statistics doesn't mean ignoring things that are less likely. At most, it means weighting the different possibilities by likelihood. You can't just make up your own statistical methods.

Even leaving all that aside, as noted above, the average if you include rows 1 and 8, at any statistical weight whatever, is even higher than 0.333. So there is still no way to get an average of 0.25. So even if your objective is to calculate the minimum expectation value for the table, that would not imply ignoring rows 1 and 8 because they are less likely. It would at most imply ignoring rows 1 and 8 because they would just increase the average.
 
  • #38
DrChinese said:
Here are some relevant rules for calculating any statistical situations:

a) If you include the cases in the denominator, you must also include the associated data in the numerator.
b) You are free to change the weighting of the cases according to your own viewpoint or rationale. For example, while I might suggest all 8 cases are equally likely, you might reasonably argue that cases [1] and [8] are rare - so you weight them as 0%. You might even say case [2] is twice as likely as case [3]. Fine. However, that still leads us to exactly the same minimum of .333 - and not .250. There are no cases with a match % less than .333.

Next, consider this: you can even make up your own results for as many trials as you like, picking whatever cases you like. And unless you pick those cases knowing in advance which 2 measurement settings (of 3) you plan to use, you will never have an average less than .333.
Cheers for this clarification. I was toying with the idea of weighting, but didn't have time to work through permutations.

I misinterpreted the explanation on your website, where you said of cases [1] and [8] that we don't know how often they occur, which gave rise to the line of thinking of the past few posts.

Thinking through it further: obviously the ratio of matches to mismatches (without cases [1] & [8]) is 1:2, and given that there must be some outcome of each trial, if cases [1] & [8] never occur the ratio of matches to mismatches for the statistical sample will have to be 1:2 (the minimum number of matches being 1/3).

Presumably it's not possible that, instead of cases [1] & [8] being:
Photon 1: A+B+C+
Photon 2: A+B+C+

they were instead:
Photon 1: A+B+C+
Photon 2: A-B-C-

Presumably there's either some experiment which has ruled this out, or would it make any difference?

I've tried working it out on a spreadsheet, but as you may understand, I'm slow to trust my own analysis when it comes to statistics.
 
  • #39
PeterDonis said:
The average if you include rows 1 and 8 is even higher.
Yes, I was misinterpreting DrC's statement about not knowing how often cases [1] and [8] occur, but I've played around with it. I tried imagining a roulette wheel and weighting the different outcomes.

PeterDonis said:
Nonsense. Doing statistics doesn't mean ignoring things that are less likely. At most, it means weighting the different possibilities by likelihood. You can't just make up your own statistical methods.
Hahaha, I wouldn't dream of it. Toying around with weighting helped me to grasp it.

PeterDonis said:
Even leaving all that aside, as noted above, the average if you include rows 1 and 8, at any statistical weight whatever, is even higher than 0.333. So there is still no way to get an average of 0.25. So even if your objective is to calculate the minimum expectation value for the table, that would not imply ignoring rows 1 and 8 because they are less likely. It would at most imply ignoring rows 1 and 8 because they would just increase the average.
Thanks for taking the time to explain it.
 
  • #40
Lynch101 said:
Presumably it's not possible that instead of cases [1] & [8] being the case:
Photon 1: A+B+C+
Photon 2: A+B+C+

It was
Photon 1: A+B+C+
Photon 2: A-B-C-
There is no "Photon 2" in the scenario @DrChinese described.
 
  • #41
PeterDonis said:
There is no "Photon 2" in the scenario @DrChinese described.
 
Ah, that would explain a lot. 🤦‍♂️

I was thinking in terms of pairs of photons being sent to separate detectors. I thought there was an assumption that each photon has the same properties.
 
  • #43
Lynch101 said:
I was thinking in terms of pairs of photons being sent to separate detectors . I thought there was an assumption that each photon has the same properties.
In the scenario @DrChinese described, a local hidden variable model is being tested. In that model, each individual photon has pre-determined values for all three measurements, A, B, and C. The model is tested by making measurements on a large number of identically prepared photons. But in each individual measurement, only one photon is measured.
 
  • #44
PeterDonis said:
In the scenario @DrChinese described, a local hidden variable model is being tested. In that model, each individual photon has pre-determined values for all three measurements, A, B, and C. The model is tested by making measurements on a large number of identically prepared photons. But in each individual measurement, only one photon is measured.
Thanks PD. I re-visited the statistics without re-reading the whole thing and had it in my head that it was photon pairs.

Thanks for taking the time to address my confused posts in this thread!
 
  • #45
PeterDonis said:
In the scenario @DrChinese described, a local hidden variable model is being tested. In that model, each individual photon has pre-determined values for all three measurements, A, B, and C. The model is tested by making measurements on a large number of identically prepared photons. But in each individual measurement, only one photon is measured.
Can that experimental set-up be used to test Bell's inequality?

I may again have misinterpreted DrC's explanation when he said
d. Bell anticipated that this result sounded good in theory, but needed more to make sense - because the above conclusion could not be tested......Using [...] entangled particles, it would be possible to measure 2 of the 3 settings mentioned above simultaneously, thus providing a way to differentiate between QM and Hidden Variables experimentally using Bell's Inequality

I may have been confusing it with this explanation, which is where the assumption of photon pairs may have come from:
DrPhysicsA - Bell's Inequality
 
  • #46
Lynch101 said:
Can that experimental set-up be used to test Bell's inequality?
Not directly since only one photon is involved, not an entangled pair. But it can serve as a test of the QM prediction, which is for an average correlation of 0.25, and which, if enough runs are done, can be distinguished from the prediction of a local hidden variable model.
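For concreteness, the 0.25 figure is just cos² of the angle between settings. A quick check, assuming the usual three-angle setup (0°, 120°, 240°, so any two distinct settings are 120° apart):

```python
import math

# Standard three-angle setup assumed in this thread: settings at
# 0°, 120°, 240°, so any two distinct settings differ by 120°.
theta = math.radians(120)

# QM match probability at settings theta apart (the cos^2 form quoted
# later in the thread; the exact form depends on the entangled state).
qm_match = math.cos(theta) ** 2
print(round(qm_match, 10))  # 0.25

# Any table of predetermined results averages to at least 1/3, so the
# two predictions are experimentally distinguishable with enough runs.
print(qm_match < 1 / 3)  # True
```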
 
  • #47
PeterDonis said:
Not directly since only one photon is involved, not an entangled pair. But it can serve as a test of the QM prediction, which is for an average correlation of 0.25, and which, if enough runs are done, can be distinguished from the prediction of a local hidden variable model.
Ah cool. I was thinking that.

Cheers for the clarification.
 
  • #48
PeterDonis said:
There is no "Photon 2" in the scenario @DrChinese described.
Sorry, just going back to this. In tests of Bell's inequality, where entangled pairs of photons are used, would it be possible that in cases [1] & [8] the two photons would have the following HVs:

Photon 1: A+B+C+
Photon 2: A-B-C-

or vice versa?
 
  • #49
Lynch101 said:
In tests of Bell's inequality...
The table of possible values would be different, since there are two photons, not one. There would be 64 possible cases, not 8.
 
  • #50
PeterDonis said:
The table of possible values would be different, since there are two photons, not one. There would be 64 possible cases, not 8.
Would that be the case even if the photons were always identical except in cases [1] & [8]?
 
  • #51
Lynch101 said:
Would that be the case even if the photons were always identical except in cases [1] & [8]?
What does this even mean?
 
  • #52
PeterDonis said:
The table of possible values would be different, since there are two photons, not one. There would be 64 possible cases, not 8.
Apparently you didn't grasp what this means. The table is not QM; it is a proposed local hidden variable model in which each photon carries with it a set of predetermined results for all three of the possible measurements, A, B, and C. The 8 rows in the table for the one-photon case are the 8 possible sets of predetermined results. For two photons, there are 64 possible sets of predetermined results (8 for each photon, so 8 x 8 = 64 total).
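The counting can be sketched directly (a minimal illustration of the 8-vs-64 point, not tied to any particular experiment):

```python
import itertools

# One photon's predetermined results for the three settings A, B, C:
# each result is + or -, giving 2**3 = 8 rows.
one_photon = list(itertools.product("+-", repeat=3))
assert len(one_photon) == 8

# Two photons, each carrying its own set of predetermined results:
# every pairing of an 8-row table with an 8-row table, 8 * 8 = 64.
two_photons = list(itertools.product(one_photon, repeat=2))
assert len(two_photons) == 64

print(len(one_photon), len(two_photons))  # 8 64
```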
 
  • #53
Lynch101 said:
Sorry, just going back to this. In tests of Bell's inequality, where entangled pairs of photons are used, would it be be possible that in cases [1] & [8] the two photons would have the following HVs:

Photon 1: A+B+C+
Photon 2: A-B-C-

or vice versa?
Possible in some hypothetical hidden variable theory? Yes, but do remember that we can't measure these hypothetical hidden variables (if we could they wouldn't be "hidden"). All we have are the measurements that we can make, and these say that:
1) When the detectors at both sides are set at the same angle we get the opposite results (+ at one detector, - at the other) every single time, probability 100%, no exceptions.
2) When the detectors are set at different angles, we get opposite results with probability ##\cos^2\theta##, where ##\theta## is the angle between the detector settings (you will see different formulations of this depending on whether we're working with photons polarized on the same or different axes or spin-1/2 particles in the singlet state, and whether the entanglement is such that opposite results are found on the same axis or perpendicular axes).

So if these A+B+C+/A-B-C- pairs are happening, then other pairs must also be generated more or less often, so that after we've measured many pairs, randomly distributed across all the possible configurations of the hidden variables, we end up with results agreeing with #1 and #2 above. Bell's theorem says that you won't be able to construct such a distribution.

You may find this old Scientific American article helpful: https://static.scientificamerican.com/sciam/assets/media/pdf/197911_0158.pdf as it directly addresses the behavior of hypothetical hidden variables of the form ##A\pm{B}\pm{C}\pm##.
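The obstruction can also be seen by brute force. A sketch, assuming rule #1 above (perfect anticorrelation at equal settings) and settings 120° apart:

```python
import itertools
import math

# Photon 1's hidden variables: predetermined +/- results for A, B, C.
hv_cases = list(itertools.product([+1, -1], repeat=3))

# Rule #1 (opposite results whenever both settings are equal) forces
# photon 2's predetermined results to mirror photon 1's, so of the 64
# joint cases only these 8 survive:
surviving = [(hv1, tuple(-x for x in hv1)) for hv1 in hv_cases]

# For a surviving pair measured at two *different* settings i and j,
# the results are opposite exactly when hv1[i] == hv1[j].
def p_opposite(hv1):
    pairs = [(0, 1), (0, 2), (1, 2)]
    return sum(hv1[i] == hv1[j] for i, j in pairs) / 3

# Every surviving case gives at least 1/3, so no distribution over
# them can reach the QM value cos^2(120°) = 0.25.
lhv_minimum = min(p_opposite(hv1) for hv1, _ in surviving)
qm_value = math.cos(math.radians(120)) ** 2
print(lhv_minimum, round(qm_value, 10))
```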
 
  • #54
PeterDonis said:
Apparently you didn't grasp what this means. The table is not QM; it is a proposed local hidden variable model in which each photon carries with it a set of predetermined results for all three of the possible measurements, A, B, and C. The 8 rows in the table for the one-photon case are the 8 possible sets of predetermined results. For two photons, there are 64 possible sets of predetermined results (8 for each photon, so 8 x 8 = 64 total).
I know it's a proposed local HV model, but to test it in the lab pairs of photons are required, no?

If both photons always have the same HVs there will only be 8 cases, not 64.

I was wondering if those two "special" cases - where both photons have the "pass all 3 filters" HVs or the "do not pass all 3" HVs - could instead have one photon with the "pass all 3 filters" HVs while the other one would have the "do not pass all 3".
 
  • #55
Lynch101 said:
If both photons always have the same HVs there will only be 8 cases, not 64.
The hidden variables carry enough information to calculate the outcome for each of six (two sides, three possible measurements at each side) possible +/- measurements. That is, the hidden variables carry enough information to answer 64 questions of the form "If this photon arrives at detector X when it is set to angle Y will the result be Z?" where X is left or right, Y is one of the three angles and Z is plus or minus. Whether the hidden variables are the same at both sides (that is, the answer to any two questions that differ only in the choice of X will be the same) doesn't reduce the number of questions, it just changes how we've encoded the hidden variables that we use to calculate the answer.
 
  • #56
Lynch101 said:
If both photons always have the same HVs
Why would you think that?
 
  • #57
Nugatory said:
Possible in some hypothetical hidden variable theory? Yes, but do remember that we can't measure these hypothetical hidden variables (if we could they wouldn't be "hidden"). All we have are the measurements that we can make, and these say that:
1) When the detectors at both sides are set at the same angle we get the opposite results (+ at one detector, - at the other) every single time, probability 100%, no exceptions.
Ah yes. I was confusing myself on this. I had it in mind that it was the opposite case with photons, that when measured at the same angle they would be perfectly correlated (as opposed to anti-correlated). I was confusing that with the idea that once a photon passes through a filter at a given polarisation, it remains polarised at that angle.

Nugatory said:
2) When the detectors are set at different angles, we get opposite results with probability ##\cos^2\theta##, where ##\theta## is the angle between the detector settings (you will see different formulations of this depending on whether we're working with photons polarized on the same or different axes or spin-1/2 particles in the singlet state, and whether the entanglement is such that opposite results are found on the same axis or perpendicular axes).
That part I remember.

Nugatory said:
So if these A+B+C+/A-B-C- pairs are happening, then other pairs must also generated more or less often so that after we've measured many pairs randomly distributed across all the possible configurations of the hidden variables we end up with results agreeing with #1 and #2 above. Bell’s theorem says that you won’t be able to construct such a distribution.

You may find thus old Scientific American article helpful: https://static.scientificamerican.com/sciam/assets/media/pdf/197911_0158.pdf as it directly addresses the behavior of hypothetical hidden variables of the form ##A\pm{B}\pm{C}\pm##.
Thanks for the explanation and the article. Both have been very helpful.
 
  • #58
Nugatory said:
The hidden variables carry enough information to calculate the outcome for each of six (two sides, three possible measurements at each side) possible +/- measurements. That is, the hidden variables carry enough information to answer 64 questions of the form "If this photon arrives at detector X when it is set to angle Y will the result be Z?" where X is left or right, Y is one of the three angles and Z is plus or minus. Whether the hidden variables are the same at both sides (that is, the answer to any two questions that differ only in the choice of X will be the same) doesn't reduce the number of questions, it just changes how we've encoded the hidden variables that we use to calculate the answer.
Apologies, I misread what PD meant by that, I thought he meant 64 combinations of particle pairs, as opposed to possible outcomes.

I was ignoring the 3 combinations where the filter settings were the same - is that the case when it comes to Bell tests? Also with my confusion about both photons having the same HVs, I was ignoring the opposite filter pairings i.e. I was only looking at AB and not BA (because I thought it wouldn't make a difference to the probability).

I don't suppose you can see at a glance, where I've gone wrong with the table below?

It only has 48 possible outcomes (excluding combinations where both filters are the same e.g. AA). It also has a 50% match rate, so I must be doing something wrong 🤦‍♂️

Including the cases where both filters are the same would bring it down to 33.33% but it would have 62 possible outcomes, instead of 64.
[Image: spreadsheet table of the 48 possible outcomes]
 
  • #59
PeterDonis said:
Why would you think that?
The standard reason, I was confusing myself!

I had it in mind that there was perfect correlations with photons when the filters were set to the same orientation on both sides of the experiment. I think I had conflated that with the idea that once a photon passes through a filter at a given polarisation, it remains polarised at that angle.
 
  • #60
Lynch101 said:
I had it in mind that there was perfect correlations with photons when the filters were set to the same orientation on both sides of the experiment.
There could be; it depends on how we generate our entangled pairs (google for "spontaneous parametric down-conversion"). Some processes will produce two entangled photons that will both pass filters oriented in the same direction; others will produce entangled photons that will both pass filters perpendicular to one another.

And do note that I worded it as "will both pass filters", not "are both polarized in the same/orthogonal direction". The difference matters.
 
