
Entanglement and Bell’s inequality Question

  1. Mar 8, 2015 #1


    Gold Member

    Forgive the layman-type question, but I was doing some reading on Bell's inequality and how it disproves the hidden variable hypothesis in entanglement.

    The example I looked at was from YouTube

    I understand the principle of how Bell's theorem works and how the tests done on polarisation of photons gave a result that wasn't possible under Bell's inequality. But I was still sort of confused.

    In the example for Bell's inequality, the author of the video uses an example where kids can be wearing hats, scarfs and gloves to represent the different 'states' that a child can be in. He then went on to explain how 'Alice' and 'Bob' would test entangled photons (which should be polarized in the exact same way) through randomly selected polarization screens. (Sorry if my terminology is crap!)

    What I don't get is that if I imagine a mother with twins who, say, puts a hat on each twin, then if I test the twins to see if they are wearing a hat, scarf or glove, then in my example the results would always show 'hats' for both those twins.

    But as I understand it, if a photon is polarised in, say, a vertical direction, then if I test using a polarization screen at a certain angle, there is only a certain probability that the photon will pass through the screen.

    So if there are a pair of entangled photons, there is the same probability that any one of them would pass through the angled polarisation. So even though they are both polarized in the same direction, as I understand it, wouldn't there still be an independent probability for each photon to pass through the angled polarization screen? So one may pass through the angled screen and one might not.

    This is different than with the twins wearing hats, as any test has to show either both are, or both are not wearing hats.

    Which is a very long-winded way of saying: is Bell's inequality really relevant for entanglement?

    Hope that makes sense.
  3. Mar 8, 2015 #2


    Science Advisor
    Gold Member

    Ah, there are a few details to consider here.

    1. Entangled photons show "perfect correlation" when measured at the SAME angle. Measure Alice at 42 degrees and Bob can be predicted with certainty at 42 degrees.

    So no, the outcomes are not independent at the same angle; the hats etc. example is relevant after all.

    2. Bell Inequality HOLDS for the hat etc example. It FAILS for entangled photons. You cannot construct a comparable classical example in which the Bell Inequality fails.
  4. Mar 8, 2015 #3


    Gold Member

    Ah ok, I didn't understand that, thanks. But as you said, this is at the SAME angle. From what I understand, different angles have different probabilities of the photon going through the polarizer. In the example video, Alice and Bob have three polarizers which they can choose at random: one vertical, one at 120 degrees and one at 240 degrees. For Bell's inequality to work, if the photon can pass through, say, 2 polarizers, then after it passes through the first one there has to be an equal probability that it can pass through either of the two remaining polarizers? Is that right?

    So what if it wasn't possible, due to the polarization of the photon, for the photon to pass through both the 120 degree and 240 degree polarizers?

    If that were the case, then wouldn't we only expect Alice and Bob to get the same results with a minimum probability of 0.222, which is consistent with the experimental results of 0.25?

    If I apply my logic above (which I know might be complete rubbish!) I could have a classical experiment where the mothers who give out the hats, gloves and scarfs have a hidden variable: they will never give a child both a hat and a scarf, maybe due to some silly superstition :) In which case, if Alice and Bob randomly chose a hat, scarf or glove detector, their results would only match with probability >= 0.222, not >= 0.333.

    Or is my thought process also rubbish :)
  5. Mar 8, 2015 #4



    Staff: Mentor

    That would be an interestingly different world... But we've done the experiment, many times, and we find that the photon does indeed make it through those polarizers with some non-zero probability.

    Yes. That's a case in which Bell's inequality is not violated, and there's no problem constructing classical scenarios of that sort. The point of Bell's theorem is that quantum mechanics predicts a violation of the inequality, while no classical scenario does.
  6. Mar 8, 2015 #5


    Science Advisor
    Gold Member

    Haha, you will need to decide that for yourself!

    As Nugatory already pointed out, QM predicts something which will violate the inequality. So that means you need to run through the math of it before you can say: likelihood of scarf but no hat, etc., per the video.

    And keep in mind that the inequality is NOT violated for ALL angle settings - only some specific ones. The reason to look at the 0/120/240 angle settings is that it is one of the easiest cases where it is violated, and it is nicely symmetric; it fits Mermin's example nicely to boot. The same goes for 0/45/60 degrees or 0/45/67.5 degrees. cos^2(60 degrees) = cos^2(120 degrees) = .25 ...
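    Those cos^2 figures are easy to verify numerically, e.g. with a couple of lines of Python:

```python
import math

# cos^2 of the relative angles for the settings mentioned above;
# 60 and 120 degrees both give 0.25
for deg in (60, 120, 45, 67.5):
    print(deg, "->", round(math.cos(math.radians(deg)) ** 2, 4))
```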
  7. Mar 9, 2015 #6


    Gold Member

    I think, in my elementary way, that is what I was getting at: once a photon is polarized in a certain direction (or an electron has a certain spin, to use another example), the detection of that polarisation has a varying probability depending on what angle is used in the detection process.

    Whereas in a classical situation, as with hats, scarfs etc., there is no varying probability for detection of a certain state. If a child is wearing a hat and we have a hat detector, then it will detect a hat 100% of the time.

    So that is why I was wondering if Bell's inequality really disproves the hidden variable explanation in Entanglement. How can using a classical tool to predict a quantum mechanical outcome be valid?

    So, let's say I was to take the hat and glove scenario and make the detection of, say, hats a varying probability depending on some other hidden variable that we aren't aware of. For example, maybe the pairs of children are of varying height. Each pair may be of the same height, but different pairs will have different heights. And let's assume the hat detection is height sensitive, so when they walk through the hat detector there is a probability of detection depending on their height. Kids in a range of, say, 3ft +/- 3 inches will have a higher probability of being detected wearing hats than kids who are 3ft +/- 3 to 6 inches. As the kids walk, their heads move up and down, so it can also be probabilistic depending on what part of their stride they are in.

    Without doing the math, I know that if I repeat the hat, scarf and gloves exercise using a probabilistic outcome for hat detection, sometimes kids wearing hats will be detected and sometimes not, which will reduce the probability of Alice and Bob getting matching results below >= 0.333. However, as the kids are still wearing hats, the prediction using 'hidden variables' does not work out, so Bell's inequality will be violated in certain cases.

    This to me is analogous to photon polarization detection: it depends on the angle of detection. Again without doing the math, I should be able to work out which angles give the better chance of violating Bell's inequality. But that's only a guess, I am not asserting anything! I will try and do the math later.

    I know, because I am just a layman, that I am probably missing something obvious, but to my simple logic it doesn't necessarily show that a violation of Bell's inequality disproves hidden variables. It just says we don't know enough about it. So I can't get my head around why the correlation of paired photons can't still be determined at creation.
    Last edited: Mar 9, 2015
  8. Mar 9, 2015 #7



    Staff: Mentor

    We aren't using a classical tool to predict a quantum mechanical outcome here. We're doing experiments to tell us what the outcome is, then comparing that outcome to the predictions of various theories.

    Bell's theorem is a mathematical and logical statement: If we start with the premise that there are local hidden variables (as defined in a particular way), then we can logically prove that a particular inequality must always hold. As such, it neither confirms nor denies quantum mechanics; it's just a statement about the general properties of any theory that includes local hidden variables.

    Then if some experiment shows the inequality not holding we know that one of three things is going on:
    1) The premise is incorrect; there are no local hidden variables using Bell's definition.
    2) There is an error in the logical proof, so the inequality does not in fact follow from the premise.
    3) There is an error in the experimental procedure, so the inequality is not in fact being violated.

    #2 is pretty much impossible - the proof is only a few pages long, is momma-poppa simple, and has withstood a half-century of scrutiny by some of the world's best skeptics.
    #3 is wildly implausible - many different teams using many different technologies have done many different experiments with different types of entangled particles, so they would somehow have to have made independent mistakes that all happened to yield the same wrong answer.

    So we're left with #1: The premise is incorrect.
  9. Mar 9, 2015 #8


    Gold Member

    Ok, thanks. I guess I take quite a brute-force approach to understanding something at times. I was not challenging Bell's theorem or any of the proof you mention. But the logic, in the way I have viewed the problem, doesn't make sense to me personally. I haven't had that "aha" moment where I understand why there can't be hidden variables.

    So will continue to read up on it.
  10. Mar 9, 2015 #9



    Staff: Mentor

    You are describing the so-called "fair sampling loophole". It's not a problem for the theorem itself, because the theorem is a statement about what will be observed if we do have a fair random sample of the experimental results; as far as the theorem is concerned, there's nothing surprising about finding the inequality violated when we don't have a fair random sample of the experimental results. Although not a problem for the theorem, it is a potential problem for the experiments; an experiment that is biased by an effect such as the one that you describe might find a violation in the biased subset that it is looking at, and this would fall under #3 ("There is an error in the experimental procedure, so the inequality is not in fact being violated") of my post above.

    However, there are two reasons to trust the experiments here. First, they have been done with different types of particles entangled on very different properties (photon polarization, particle spin) and there's no single mechanism that could bias them both in ways that would create an inequality violation. In your example, a height bias might explain the anomaly for children and hats... but if we saw similar anomalies with dogs and traits such as long hair and cropped ears, we'd need to assume a whole different mechanism that somehow produced exactly the same statistical bias. That's a bit of a stretch.

    Second, several of the experiments have been run with detector efficiencies high enough that we know that we're looking at all of the results, not just a subset that happens to behave differently.
  11. Mar 9, 2015 #10


    Science Advisor
    Gold Member

    You are using the example of the matching incorrectly when you reference .333. Let's translate it to your example.

    Imagine you ask: what is the likelihood of a hat and a scarf, OR no hat and no scarf. Let's call either of those a MATCH. A mismatch is a hat and no scarf, OR a scarf and no hat. Similar match/mismatch definitions for the hat/glove pair, and the scarf/glove pair.

    From ANY sample of twins you care to supply (both dressed the same), if I get to select which 2 of hat/glove/scarf you look at, the MATCH ratio will NEVER be less than .33 (given a sufficient sample size). It can go as high as 1.00.

    But... the analogous QM sample could be as low as .25. This is David Mermin's example.

    While the math seems difficult, it really isn't. Try it with 3 coins in a triangle. You will quickly see that the likelihood of any pair of coins being the same (head-head or tail-tail) CANNOT be less than .333. Same example as above. The ONLY way you can ever get the odds below .33 is if you CHEAT, meaning: you know the outcomes in advance and select your measurements accordingly. That is called observer dependent. Note that you need to know both outcomes; 1 isn't enough for this game.
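    If you want to see that bound fall out, here is a brute-force sketch in Python over all 8 ways the 3 coins can land, using the same match/mismatch counting as above:

```python
from itertools import combinations, product

# For every way of fixing 3 coins in advance, count what fraction of the
# 3 possible pairs agree (head-head or tail-tail), and track the worst case.
worst = 1.0
for coins in product("HT", repeat=3):
    pairs = list(combinations(range(3), 2))  # (0,1), (0,2), (1,2)
    match_ratio = sum(coins[i] == coins[j] for i, j in pairs) / len(pairs)
    worst = min(worst, match_ratio)
print(worst)  # 0.333...: no pre-set assignment gives fewer than 1 matching pair out of 3
```

    Only by choosing which pair to look at after seeing the coins can the ratio be pushed below 1/3, which is the "cheating" described above.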
  12. Mar 16, 2015 #11


    Gold Member

    Hi and thanks for the reply.

    Your explanation helped me to understand the classical case better, thanks. But I am still struggling with the same thing that was puzzling me before, which is that in the classical case there is a 50% probability of someone wearing a hat, scarf or glove, BUT if someone is wearing a hat, glove or scarf, there is a 100% probability it will be detected. This is not the case for detecting photon polarisation.

    Let me try and explain my rationale:

    In the video example, we are detecting photon polarisation using 3 detectors at 3 different angles (0, 120 and 240) and simply seeing if we detect a photon or not.

    However, in the case with photon polarisation, if a photon is in a certain angular state, the probability of detection is not always 100%, it depends on the angle of polarisation of the photon in relationship to the angle of the detector.

    So for example (using cos²θ, which I understood to be the way to work out the probability), if a photon pair is polarised in the 'up' direction, i.e. 0 degrees, then the probabilities of detection in the 3 cases mentioned are: 0 degrees = 1, 120 degrees = 0.25 and 240 degrees = 0.25. And these probabilities vary depending on the angle of the photon, so the probability relationship between the 3 detectors varies too.

    Which is a bit like saying we might not detect a kid who is wearing a hat if the detector is at an angle to him.

    I know in principle it doesn't matter what the probability is of someone placing a hat, scarf or glove on a kid, so it doesn't have to be 50%, but the detection must be accurate or events that have been randomly generated will not be detected.

    For example, say we have a pair of twins both wearing a hat and a scarf; Bob randomly chooses the 'Hat' detector and Alice randomly chooses the 'Scarf' detector. We should see a MATCH in the results, but what is actually recorded is NO MATCH, because Bob's detector didn't pick up the 'Hat'. BUT we could have the exact same situation occur again (i.e. two photons polarised at the exact same angle as in the first case), but this time both Bob's and Alice's detectors trigger and we get a MATCH.

    So my logic says this must skew the results, as we are recording fewer MATCHES than have actually been randomly generated. So it must affect the outcome of the result and reduce the total MATCH probability? In my mind that is no different to me just crossing off a few MATCHES at the end of the test in order to reduce the outcome.

    So where am I going wrong?
  13. Mar 16, 2015 #12



    Staff: Mentor

    Such missed detections do indeed skew the results; by increasing the number of mismatches, they reduce the measured size of the inequality violation relative to the actual size of the violation across the entire population of test particles. Thus, the violation of the inequality across the entire population will be larger than the measured violation - which makes experiments that find violations more convincing evidence, not less.

    It is true that this argument depends on fair sampling, as I described above - but the fair sampling loophole is pretty well closed by some experiments.

    If you haven't already seen http://en.wikipedia.org/wiki/Loopho...ents#Detection_efficiency.2C_or_fair_sampling, it's worth a look.
    Last edited: Mar 16, 2015
  14. Mar 16, 2015 #13


    Gold Member

    Thanks for the link. I had a quick read but it looks a bit over my head! But will try and go through it later.

    And there lies the crux of the problem for me. I can see that the experiments prove quantum mechanics but I haven't quite grasped how they disprove hidden variables.
  15. Mar 16, 2015 #14


    Science Advisor
    Gold Member

    That is not correct. Of course there are inefficiencies etc. but those can be ignored for our case, which is the IDEAL case. A beam splitter is used to determine polarization so that all photons are detected. In actual experiments there is no statistical connection between the observation angle and the detection ratio. So as long as that is true, there is nothing to talk about.

    Nugatory is completely correct but for your purposes, forget the fair sampling issue UNTIL you understand the main issue. Go back to the hat/scarf/glove example and DON'T leave it until you understand that the classical example and the quantum examples are in contradiction.

    So... the next step for you should be to frame your question in terms of the example we have been discussing. The analogy is that no hat and hat are both detected with equal efficiency, and you should assume it is 100%. Once you master this example, we can move on to sampling. Remember: there are always perfect correlations when the SAME angle is measured. That should serve to remind you of the issue of hidden variables.
  16. Mar 17, 2015 #15


    Science Advisor
    Gold Member

    It is true that some polarizers only allow some light through. But most Bell tests use Polarizing Beam Splitters (PBS), which separate the input stream into an H output stream and a V output stream. Detectors are placed at BOTH outputs. The PBS itself is oriented at any angle you desire. As I said previously, there is no useful connection between the likelihood of a detection as H or as V which is in any manner dependent on the PBS orientation.

    In our hat/glove/scarf example: you will see Hat/NoHat at very equal rates, as Scarf/NoScarf and Glove/NoGlove. You can hand prepare any* sample, as a matter of fact, and the Bell Inequality will *always* be respected as long as the choice of measurements I select are unknown to you in advance.

    Detection rate is not an issue in this. But even if it was, you should be able to see that: UNLESS there is a special connection between the 2 settings being chosen AND the chance of detection, the result will be a fair sample. So again, detection rate would not be an issue. If there were such a special connection, it would need to be non-local since you can switch settings very fast (such tests have been run).

    *large enough...
  17. Mar 17, 2015 #16


    Gold Member

    Firstly, thanks very much to you and Nugatory for your help on this. It is very much appreciated and I am trying to get my head around it!

    So to help me to do that, in my own clumsy way, I decided to make a simulation for both the classic and quantum scenarios in a spreadsheet. I hope I have done this correctly, and have listed my method and results below.

    Scenario 1 – Hats, Scarfs & Gloves.

    For each case (hat, glove and scarf) I used a random number generator to pick if the twins were wearing an item or not (both twins dressed the same), which is a probability of 0.5 for each item.

    I then used another random number generator to select which detector Alice and Bob would choose (hat, scarf or glove detector), and of course this is a 0.333 probability of selecting a particular detector.

    So how it works is, if Alice chooses the ‘Hat’ detector and she finds her twin is wearing a hat, then this is a positive detection. If she finds the child is not wearing a hat then this is a negative detection. And so on, depending on what detector she and Bob had selected and the items of clothing the twins were wearing.

    I then compared Alice and Bob’s result for each test. If both had the same result (e.g. both made a positive detection or both made a negative detection) then I classed this as a MATCH. If one had made a detection but the other had not, then this was a FAIL.

    I did this by formula in the rows, making 10,000 rows or tests.

    The results showed that Alice & Bob had a match 3384 times out of the 10,000 tries, so a MATCH probability of 0.3384. I re-did this another 30 times, as it is just pressing a button to recalculate, and the average of the 30 tests was a probability of a MATCH of 0.3385.

    As a check, I added up all the positive detections from Alice and Bob, which came to a probability of 0.5007. Which as I understand it is what you would expect. i.e. they make a positive detection 50% of the time, as there is a 50% chance of a child wearing a hat, scarf or glove.
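    For anyone who wants to reproduce this without a spreadsheet, here is the Scenario 1 logic as a short Python sketch (one simplification on my part: I count a MATCH only when both detectors fire):

```python
import random

random.seed(0)
ITEMS = ("hat", "scarf", "glove")
N = 100_000
matches = 0
for _ in range(N):
    # each item worn with probability 0.5; both twins dressed identically
    worn = {item: random.random() < 0.5 for item in ITEMS}
    alice = random.choice(ITEMS)   # Alice's randomly chosen detector
    bob = random.choice(ITEMS)     # Bob's randomly chosen detector
    if worn[alice] and worn[bob]:  # both make a positive detection
        matches += 1
print(matches / N)  # close to 1/3
```

    With this counting the expected rate works out to (1/3)(1/2) + (2/3)(1/4) = 1/3, in line with the 0.3385 figure above.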

    Scenario 2 – Photon Polarisation

    I'm not sure I fully understand the actual physics here, but I sort of get the probabilities. So for this I used a random number generator between 0 and 359 to randomly choose the angle of the pair of photons (both the same), taking 0 degrees as the Z direction (or up/down). Angles were in integers.

    I then used a random number generator between 1 and 3 to select at random which detector Alice and Bob would choose separately. 1= 0 degrees, 2 = 120 degrees and 3 = 240 degrees.

    For both Alice and Bob, I then did a calculation of the probability of detection depending on the angle of the photon and the detector selected. E.g. if Alice chose the 120 degree detector and the angle of the photon was 200, then I subtract 120 from 200 to get 80 degrees, and then work out the probability of detection using the formula cos²θ.

    Once I had the probability, to simulate a test I generated a random number between 0 and 1 (I think Excel does this to 6 decimal places). If the random number was less than or equal to the probability calculated for detection, then I classed this as a positive detection; otherwise it is a negative detection.

    So for example, if the probability for detection is 0.75 and the random number generated is less, say 0.563, then this is a positive detection. If the probability was say 0.1 and the random number greater, say 0.333, then this is a negative detection.

    I then compared Alice and Bob’s results and where they had the same result (e.g. both positive or both negative result) then I classed this as a MATCH. If not it was a FAIL.

    Like in the first case I did these formulas in rows and made 10,000 rows.

    I then ran the simulation of 10,000 rows 30 times and got a final MATCH probability of 0.2506, comparing the total number of matches to the total number of tests.

    I also calculated the total probability of each individual test, which was 0.5024, which seems to make sense.
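    The Scenario 2 logic compresses to a similar Python sketch (a simplification of mine: a MATCH is counted only when both sides register a detection; the shared polarisation angle plays the role of the hidden variable):

```python
import math
import random

random.seed(0)
ANGLES = (0, 120, 240)
N = 100_000
matches = 0
for _ in range(N):
    theta = random.uniform(0, 360)       # shared polarisation of the pair
    alice = random.choice(ANGLES)        # Alice's randomly chosen detector
    bob = random.choice(ANGLES)          # Bob's randomly chosen detector
    # independent cos^2 detection probability on each side
    hit_a = random.random() < math.cos(math.radians(theta - alice)) ** 2
    hit_b = random.random() < math.cos(math.radians(theta - bob)) ** 2
    if hit_a and hit_b:
        matches += 1
print(matches / N)  # close to 0.25 for this local model
```

    Averaged over the random photon angle, this counting gives a both-fire rate of 1/4 overall, in line with the 0.2506 figure above.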

    So in conclusion, I get a 0.3385 probability of a MATCH in the classical case and 0.2506 in the simulated quantum case, which seems to match experimental results.

    I can also change the detector angles and get different match probabilities.

    The only thing I am not sure about is if I should have compared MATCHES to the total number of tests, OR only to cases where Alice and Bob had chosen different detectors.

    Does all that make sense?


    As an observation, I checked the cases where Alice and Bob had selected the same detector, hence were testing the same angle, and around 850 times the simulation got a different test result for Alice compared to Bob.

    However, if I was to class these ‘missed detections’ as a MATCH and add them to the MATCH results, then the probability of detecting a MATCH would have been around 0.333.

    So does this account for the difference seen in my observation? I assumed the probability of detection is still independent. But if I understand your statement, you are saying that in real experiments the same angle will always give the same result? If that is the case I can understand why that disproves hidden variables. Or is it just a coincidence?
    Last edited: Mar 17, 2015
  18. Mar 18, 2015 #17


    Science Advisor
    Gold Member

    Scenario 1: This is what I would expect. Something above .3333 but fairly close to it.

    Scenario 2: This may be the right answer, but the underlying calculation is wrong if you are trying to simulate Quantum Mechanics. There is no "200" (your example)! There is ONLY (and I mean only) cos^2(Alice's Angle - Bob's Angle). For this example, that is .25 and for a random sample, you will get a figure near .25 (which you did). Again, right answer, wrong formula.

    Scenario 3: This is where the flaw of the previous scenario becomes glaringly apparent. You should NOT get any differences at the SAME angle. Even with some random variability built in, it should be quite close to 0. Certainly nothing like 8%. And because of this, you think it explains something... which it doesn't because this is incorrect.

    In my simulations, this is ALWAYS exactly 0.
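    To make the correct rule concrete, here is a small Python check (my sketch); note that only the relative angle between Alice's and Bob's settings appears anywhere:

```python
import math

settings = (0, 120, 240)
# Quantum prediction: P(match) = cos^2(Alice's angle - Bob's angle)
rates = []
for a in settings:
    for b in settings:
        if a != b:  # look at the different-setting pairs
            rates.append(math.cos(math.radians(a - b)) ** 2)
print(round(sum(rates) / len(rates), 4))  # 0.25: every different-setting pair gives cos^2(120)
```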
  19. Mar 19, 2015 #18


    Gold Member

    Ok. That has confused me a bit. Why cos^2(Alice's Angle - Bob's Angle)?

    The 200 was just an example of the random polarisation of a photon. As I understand it, in order to simulate whether the photon would be detected or not, I have to work out the probability of EACH detection test. So if a randomly selected detector was at 0 degrees and a randomly polarised photon was at 90 degrees, then the probability of detection is cos^2(90 degrees), which is 0. If a photon is at 200 degrees and a detector at 120 degrees, then the probability is cos^2(200 degrees - 120 degrees), which is about 0.03. As I understand it, the probability is based on the difference in angle between the detector and the angle of polarisation. Is that correct?

    So in order to simulate a detection, I generate a random number between 0 and 1 and if that is less than or equal to the probability, then I say that the photon was detected.

    So for each valid test (a valid test being where Alice and Bob have randomly selected different detectors, e.g. 120 and 240, or 0 and 120...), if Alice and Bob have the same result (both detected or both did not detect a photon) then this is a match. So I add up all the matches and divide by the number of valid tests, and I should get >= 0.333? But I know in real experiments that would be 0.25.

    What I found strange is that if I simulate the above, then yes, I do get >= 0.333 if I compare matches against valid results, but I always get 0.25 if I compare matches against the total number of tests. Which I assumed is just a coincidence.

    Yes, I sort of figured I am doing something wrong. But just wanted to understand the first part.
  20. Mar 19, 2015 #19
    Neat overview!
    I'm sure that you are aware of the criticism by Jaynes that Bell did not strictly apply the formal rules of probability calculus, so that he (right at the start) made a possibly unwarranted assumption which allowed him to proceed with a simple formula. In no paper did I find a formal proof that his objection is harmless. Do you know a paper that does?
    Not necessarily so. All teams seem to apply the same kind of reasoning about why the shortcomings in each experiment don't matter; they are not really "thinking outside the box". For example, they look at using different mechanisms without considering the possibility that an as yet unknown principle may be unifying the results. A similar thing happened with Michelson-Morley, when it seemed impossible to match mechanics with electromagnetism in a logical way.
  21. Mar 19, 2015 #20
    PS I didn't phrase that well. In fact, matching mechanics with electromagnetism was the path to the solution. But shortly before that path was chosen, the results of different Michelson-Morley-type experiments (what we now call "classical" light experiments) contradicted all imagined models of light; in fact they appeared to contradict each other! It looked as if those and other experimental results could not be described with a consistent realistic model.