Entanglement and Bell’s inequality Question

In summary, Bell's inequality is a constraint that any local hidden variable theory must satisfy. Entangled particles show perfect correlation when measured at the same angle, but for measurements at different angles quantum mechanics predicts match rates that violate the inequality, whereas comparable classical scenarios always obey it. That contrast is why Bell's inequality is central to understanding entanglement.
  • #1
rede96
Forgive the layman-type question, but I was doing some reading on Bell's inequality and how it disproves the hidden-variable hypothesis in entanglement.

The example I looked at was from a YouTube video.

I understand the principle of how Bell's theorem works and how the tests done on polarisation of photons gave a result that wasn't possible under Bell's inequality. But I was still sort of confused.

In the example for Bell's inequality, the author of the video uses an example where kids can be wearing hats, scarfs and gloves to represent the different 'states' that a child can be in. He then went on to explain how 'Alice' and 'Bob' would test entangled photons (which should be polarized in the exact same way) through randomly selected polarization screens. (Sorry if my terminology is crap!)

What I don't get is that if I imagine a mother with twins who, say, puts a hat on each twin, then if I test the twins to see if they are wearing a hat, scarf or glove, then in my example the results would always show 'hats' for both those twins.

But as I understand it, if a photon is polarised in, say, the vertical direction, then if I test using a certain angle of polarization, there is only a certain probability that the photon will pass through the screen.

So if there is a pair of entangled photons, there is the same probability that any one of them would pass through the angled polarisation. So even though they are both polarized in the same direction, as I understand it, wouldn't there still be an independent probability for each photon to pass through the angled polarization screen? So one may pass through the angled screen and one might not.

This is different than with the twins wearing hats, as any test has to show either both are, or both are not wearing hats.

Which is a very long-winded way of asking: is Bell's inequality really relevant for entanglement?

Hope that makes sense.
 
  • #2
rede96 said:
So if there is a pair of entangled photons, there is the same probability that any one of them would pass through the angled polarisation. So even though they are both polarized in the same direction, as I understand it, wouldn't there still be an independent probability for each photon to pass through the angled polarization screen? So one may pass through the angled screen and one might not.

This is different than with the twins wearing hats, as any test has to show either both are, or both are not wearing hats.

Ah, there are a few details to consider here.

1. Entangled photons show "perfect correlation" when measured at the SAME angle. Measure Alice at 42 degrees and Bob can be predicted with certainty at 42 degrees.

So no, the two outcomes are not independent; the hats etc. example is relevant.

2. Bell Inequality HOLDS for the hat etc example. It FAILS for entangled photons. You cannot construct a comparable classical example in which the Bell Inequality fails.
 
  • #3
DrChinese said:
1. Entangled photons show "perfect correlation" when measured at the SAME angle. Measure Alice at 42 degrees and Bob can be predicted with certainty at 42 degrees.

Ah ok, I didn't understand that, thanks. But as you said, this is at the SAME angle. But from what I understand, different angles have different probabilities of the photon going through the polarizer. So in the example video, Alice and Bob have three polarizers which they can choose at random: one vertical, one at 120 degrees and one at 240 degrees. For Bell's inequality to work, if the photon can pass through, say, 2 polarizers, then after it passes through the first one there has to be an equal probability that it can pass through either of the two remaining polarizers? Is that right?

So what if it wasn't possible, due to the polarization of the photon, for the photon to pass through both the 120 degree and 240 degree polarizers?

If that was the case, then wouldn't we only expect Alice and Bob to get the same results with a minimum probability of 0.222, which would be consistent with the experimental result of 0.25?
DrChinese said:
2. Bell Inequality HOLDS for the hat etc example. It FAILS for entangled photons. You cannot construct a comparable classical example in which the Bell Inequality fails.

If I apply my logic above (which I know might be complete rubbish!) I could have a classical experiment where the mothers who give out the hats, gloves and scarfs have a hidden variable: they will never give a child both a hat and a scarf, maybe due to some silly superstition :) In which case, if Alice and Bob randomly chose a hat, scarf or glove detector, their results would only match at >= 0.222, not >= 0.333.

Or is my thought process also rubbish :)
 
  • #4
rede96 said:
So what if it wasn't possible, due to the polarization of the photon, for the photon to pass through both the 120 degree and 240 degree polarizers?
That would be an interestingly different world... But we've done the experiment, many times, and we find that the photon does indeed make it through those polarizers with some non-zero probability.

rede96 said:
If I apply my logic above (which I know might be complete rubbish!) I could have a classical experiment where the mothers who give out the hats, gloves and scarfs have a hidden variable: they will never give a child both a hat and a scarf, maybe due to some silly superstition :) In which case, if Alice and Bob randomly chose a hat, scarf or glove detector, their results would only match at >= 0.222, not >= 0.333.

Yes. That's a case in which Bell's inequality is not violated, and there's no problem constructing classical scenarios of that sort. The point of Bell's theorem is that quantum mechanics predicts a violation of the inequality, while no classical scenario does.
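In symbols, for the three-setting example from the video (my shorthand, not Bell's original formulation):

$$P_{\text{classical}}(\text{match at two different settings}) \;\ge\; \tfrac{1}{3}, \qquad P_{\text{QM}}(\text{match at two different settings}) = \cos^2(120^\circ) = \tfrac{1}{4}$$

The 1/4 is what the experiments keep finding, and no assignment of pre-set answers can get below the 1/3.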
 
  • #5
rede96 said:
...which would be consistent with the experimental result of 0.25?

If I apply my logic above (which I know might be complete rubbish!) I could have a classical experiment where the mothers who give out the hats, gloves and scarfs have a hidden variable: they will never give a child both a hat and a scarf, maybe due to some silly superstition :) In which case, if Alice and Bob randomly chose a hat, scarf or glove detector, their results would only match at >= 0.222, not >= 0.333.

Or is my thought process also rubbish :)

Haha, you will need to decide that for yourself!

As Nugatory already pointed out, QM predicts something which will violate the inequality. So that means you need to run through the math of it before you can say. Likelihood of scarf but no hat, etc., per the video.

And keep in mind that the inequality is NOT violated for ALL angle settings - only some specific ones. The reason to look at the 0/120/240 angle settings is that it is one of the easiest cases where it is violated, it is nicely symmetric, and it fits Mermin's example nicely to boot. Other sets work too, e.g. 0/45/60 degrees or 0/45/67.5 degrees. Note cos^2(60 degrees) = cos^2(120 degrees) = .25 ...
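To see where the .25 comes from for every pair of the 0/120/240 settings, here is a minimal Python check (my own sketch) of the quantum match rule cos^2(θA - θB), which comes up again later in this thread:

```python
from math import cos, radians
from itertools import combinations

def match(a, b):
    # QM probability that Alice and Bob get the same outcome at angles a, b
    return cos(radians(a - b)) ** 2

settings = (0, 120, 240)
rates = [match(a, b) for a, b in combinations(settings, 2)]
print([round(r, 4) for r in rates])   # [0.25, 0.25, 0.25]
print(sum(rates) / len(rates))        # 0.25, below the classical 1/3 bound
```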
 
  • #6
Nugatory said:
That would be an interestingly different world... But we've done the experiment, many times, and we find that the photon does indeed make it through those polarizers with some non-zero probability.

DrChinese said:
And keep in mind that the inequality is NOT violated for ALL angle settings - only some specific ones

I think, in my elementary way, that is what I was getting at: that once a photon is polarized in a certain direction (or an electron has a certain spin, to use another example), the detection of that polarisation has a varying probability depending on what angle is used in the detection process.

Whereas in a classical situation, as with hats, scarfs etc., there is no varying probability for detection of a certain state. If a child is wearing a hat and we have a hat detector, then it will detect a hat 100% of the time.

So that is why I was wondering if Bell's inequality really disproves the hidden variable explanation in Entanglement. How can using a classical tool to predict a quantum mechanical outcome be valid?

Nugatory said:
The point of Bell's theorem is that quantum mechanics predicts a violation of the inequality, while no classical scenario does.

So, let's say I was to take the hat and glove scenario and make the detection of, say, hats a varying probability depending on some other hidden variable that we aren't aware of. For example, maybe the pairs of children are of varying height. Each pair may be of the same height, but different pairs will have different heights. And let's assume the hat detection is height sensitive. So when they walk through the hat detector, there is a probability of detection depending on their height. Kids in a height range of, say, 3 ft +/- 3 inches will have a higher probability of being detected wearing hats than kids who are 3 ft +/- 3 to 6 inches. As the kids walk their heads move up and down, so it can also be probabilistic depending on what part of their stride they are in.

Without doing the math, I know that if I repeat the hat, scarf and gloves exercise using a probabilistic outcome for hat detection, sometimes kids wearing hats will be detected and sometimes not, which will reduce the probability of Alice and Bob getting matching results to below 0.333. However, as the kids are still wearing hats, the prediction using 'hidden variables' does not work out, so Bell's inequality will be violated in certain cases.

This to me is analogous to photon polarization detection; it depends on the angle of detection. Again, without doing the math, I should be able to work out which angles give the better chance of violating Bell's inequality. But that's only a guess, I am not asserting anything! I will try and do the math later.

I know, because I am just a layman, that I am probably missing something obvious, but to my simple logic it doesn't necessarily show that a violation of Bell's inequality disproves hidden variables. It just says we don't know enough about it. So I can't get my head around why the correlation of paired photons can't still be determined at creation.
 
  • #7
rede96 said:
So that is why I was wondering if Bell's inequality really disproves the hidden variable explanation in Entanglement. How can using a classical tool to predict a quantum mechanical outcome be valid?

We aren't using a classical tool to predict a quantum mechanical outcome here. We're doing experiments to tell us what the outcome is, then comparing that outcome to the predictions of various theories.

Bell's theorem is a mathematical and logical statement: If we start with the premise that there are local hidden variables (as defined in a particular way), then we can logically prove that a particular inequality must always hold. As such, it neither confirms nor denies quantum mechanics; it's just a statement about the general properties of any theory that includes local hidden variables.

Then if some experiment shows the inequality not holding we know that one of three things is going on:
1) The premise is incorrect; there are no local hidden variables using Bell's definition.
2) There is an error in the logical proof so the inequality does not in fact follow from the premise.
3) There is an error in the experimental procedure, so the inequality is not in fact being violated.

#2 is pretty much impossible - the proof is only a few pages long, is momma-poppa simple, and has withstood a half-century of scrutiny by some of the world's best skeptics.
#3 is wildly implausible - many different teams using many different technologies have done many different experiments with different types of entangled particles, so they would somehow have to have made independent mistakes that all happened to yield the same wrong answer.

So we're left with #1: The premise is incorrect.
 
  • #8
Nugatory said:
We aren't using a classical tool to predict a quantum mechanical outcome here. We're doing experiments to tell us what the outcome is, then comparing that outcome to the predictions of various theories.

Bell's theorem is a mathematical and logical statement: If we start with the premise that there are local hidden variables (as defined in a particular way), then we can logically prove that a particular inequality must always hold. As such, it neither confirms nor denies quantum mechanics; it's just a statement about the general properties of any theory that includes local hidden variables.

Then if some experiment shows the inequality not holding we know that one of three things is going on:
1) The premise is incorrect; there are no local hidden variables using Bell's definition.
2) There is an error in the logical proof so the inequality does not in fact follow from the premise.
3) There is an error in the experimental procedure, so the inequality is not in fact being violated.

#2 is pretty much impossible - the proof is only a few pages long, is momma-poppa simple, and has withstood a half-century of scrutiny by some of the world's best skeptics.
#3 is wildly implausible - many different teams using many different technologies have done many different experiments with different types of entangled particles, so they would somehow have to have made independent mistakes that all happened to yield the same wrong answer.

So we're left with #1: The premise is incorrect.
Ok, thanks. I guess I take quite a brute-force approach to understanding something at times. So I was not challenging Bell's theorem or any of the proofs you mention. But the logic, in the way I have viewed the problem, doesn't make sense to me personally. I haven't had that "aha" moment where I understand why there can't be hidden variables.

So will continue to read up on it.
 
  • #9
rede96 said:
So, let's say I was to take the hat and glove scenario and make the detection of, say, hats a varying probability depending on some other hidden variable that we aren't aware of. For example, maybe the pairs of children are of varying height. Each pair may be of the same height, but different pairs will have different heights. And let's assume the hat detection is height sensitive. So when they walk through the hat detector, there is a probability of detection depending on their height. Kids in a height range of, say, 3 ft +/- 3 inches will have a higher probability of being detected wearing hats than kids who are 3 ft +/- 3 to 6 inches. As the kids walk their heads move up and down, so it can also be probabilistic depending on what part of their stride they are in.

You are describing the so-called "fair sampling loophole". It's not a problem for the theorem itself, because the theorem is a statement about what will be observed if we do have a fair random sample of the experimental results; as far as the theorem is concerned, there's nothing surprising about finding the inequality violated when we don't have a fair random sample of the experimental results. Although not a problem for the theorem, it is a potential problem for the experiments; an experiment that is biased by an effect such as the one that you describe might find a violation in the biased subset that it is looking at, and this would fall under #3 ("There is an error in the experimental procedure, so the inequality is not in fact being violated") of my post above.

However, there are two reasons to trust the experiments here. First, they have been done with different types of particles entangled on very different properties (photon polarization, particle spin) and there's no single mechanism that could bias them both in ways that would create an inequality violation. In your example, a height bias might explain the anomaly for children and hats... but if we saw similar anomalies with dogs and traits such as long hair and cropped ears, we'd need to assume a whole different mechanism that somehow produced exactly the same statistical bias. That's a bit of a stretch.

Second, several of the experiments have been run with detector efficiencies high enough that we know that we're looking at all of the results, not just a subset that happens to behave differently.
 
  • #10
rede96 said:
Without doing the math, I know that if I repeat the hat, scarf and gloves exercise using a probabilistic outcome for hat detection, sometimes kids wearing hats will be detected and sometimes not, which will reduce the probability of Alice and Bob getting matching results to below 0.333. ...

You are using the example of the matching incorrectly when you reference .333. Let's translate it to your example.

Imagine you ask: what is the likelihood of a hat and a scarf, OR no hat and no scarf. Let's call either of those a MATCH. A mismatch is a hat and no scarf, OR a scarf and no hat. Similar match/mismatch definitions for the hat/glove pair, and the scarf/glove pair.

From ANY sample of twins you care to supply (both dressed the same), if I get to select which 2 of hat/glove/scarf you look at, the MATCH ratio will NEVER be less than .33 (given a sufficient sample size). It can go as high as 1.00.

But... the analogous QM sample could be as low as .25. This is David Mermin's example.

While the math seems difficult, it really isn't. Try it with 3 coins in a triangle. You will quickly see that the likelihood of any pair of coins being the same (head-head or tail-tail) CANNOT be less than .333. Same example as above. The ONLY way you can ever get the odds below .33 is if you CHEAT. That means: you know the outcomes in advance and select your measurements accordingly. That is called observer dependent. Note that you need to know both outcomes; 1 isn't enough for this game.
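You can also let a computer do the counting. A brute-force sketch (mine, not from the video) over all 8 ways three pre-set coins can land:

```python
from itertools import combinations, product

# All 8 ways three pre-set values ("coins" in a triangle) can come out.
worst = 1.0
for coins in product("HT", repeat=3):
    pairs = list(combinations(range(3), 2))      # the 3 possible pairs
    same = sum(coins[i] == coins[j] for i, j in pairs)
    worst = min(worst, same / len(pairs))

print(worst)   # 0.333...: no arrangement of 3 coins gets a pair-match rate below 1/3
```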
 
  • #11
DrChinese said:
You are using the example of the matching incorrectly when you reference .333. Let's translate it to your example.

Imagine you ask: what is the likelihood of a hat and a scarf, OR no hat and no scarf. Let's call either of those a MATCH. A mismatch is a hat and no scarf, OR a scarf and no hat. Similar match/mismatch definitions for the hat/glove pair, and the scarf/glove pair.

From ANY sample of twins you care to supply (both dressed the same), if I get to select which 2 of hat/glove/scarf you look at, the MATCH ratio will NEVER be less than .33 (given a sufficient sample size). It can go as high as 1.00.

But... the analogous QM sample could be as low as .25. This is David Mermin's example.

While the math seems difficult, it really isn't. Try it with 3 coins in a triangle. You will quickly see that the likelihood of any pair of coins being the same (head-head or tail-tail) CANNOT be less than .333. Same example as above. The ONLY way you can ever get the odds below .33 is if you CHEAT. That means: you know the outcomes in advance and select your measurements accordingly. That is called observer dependent. Note that you need to know both outcomes; 1 isn't enough for this game.
Hi and thanks for the reply.

Your explanation helped me to understand the classical case better, thanks. But I am still struggling with the same thing that was puzzling me before, which is that in the classical case there is a 50% probability of someone wearing a hat, scarf or glove, BUT if someone is wearing a hat, glove or scarf, there is a 100% probability it will be detected. This is not the case for detecting photon polarisation.

Let me try and explain my rationale:

In the video example, we are detecting photon polarisation using 3 detectors at 3 different angles (0, 120 and 240). And simply seeing if we detect a photon or not.

However, in the case with photon polarisation, if a photon is in a certain angular state, the probability of detection is not always 100%, it depends on the angle of polarisation of the photon in relationship to the angle of the detector.

So for example (using cos²θ, which I understood to be the way to work out the probability), if a photon pair is polarised in the 'up' direction, i.e. 0 degrees, then the probabilities of detection in the 3 cases mentioned are: 0 degrees = 1, 120 degrees = 0.25 and 240 degrees = 0.25. And these probabilities vary depending on the angle of the photon, so the probability relationship between the 3 detectors varies too.
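(Spelling out that arithmetic, as I understand it:)

$$\cos^2(0^\circ) = 1, \qquad \cos^2(120^\circ) = \left(-\tfrac{1}{2}\right)^2 = \tfrac{1}{4}, \qquad \cos^2(240^\circ) = \left(-\tfrac{1}{2}\right)^2 = \tfrac{1}{4}$$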

Which is a bit like saying we might not detect a kid who is wearing a hat if the detector is at an angle to him.

I know in principle it doesn't matter what the probability is of someone placing a hat, scarf or glove on a kid, so it doesn't have to be 50%, but the detection must be accurate or events that have been randomly generated will not be detected.

For example, if we have a pair of twins both wearing a hat and a scarf, Bob randomly chooses the 'Hat' detector and Alice randomly chooses the 'Scarf' detector. We should see a MATCH in the results. But what is actually recorded is NO MATCH, because Bob's detector didn't pick up the 'Hat'. BUT we could have the exact same situation occur again (i.e. two photons polarised at the exact same angle as in the first case), but this time both Bob and Alice's detectors trigger and we get a MATCH.

So my logic says this must skew the results, as we are recording fewer MATCHES than have actually been randomly generated. So it must affect the outcome of the result and reduce the total MATCH probability? In my mind that is no different to me just crossing off a few MATCHES at the end of the test in order to reduce the outcome.

So where am I going wrong?
 
  • #12
rede96 said:
So my logic says this must skew the results, as we are recording fewer MATCHES than have actually been randomly generated. So it must affect the outcome of the result and reduce the total MATCH probability? In my mind that is no different to me just crossing off a few MATCHES at the end of the test in order to reduce the outcome.

Such missed detections do indeed skew the results; by increasing the number of mismatches, they reduce the measured size of the inequality violation relative to the actual size of the violation across the entire population of test particles. Thus, the violation of the inequality across the entire population will be larger than the measured violation - which makes experiments that find violations more convincing evidence, not less.

It is true that this argument depends on fair sampling, as I described above - but the fair sampling loophole is pretty well closed by some experiments.

If you haven't already seen http://en.wikipedia.org/wiki/Loopho...ents#Detection_efficiency.2C_or_fair_sampling, it's worth a look.
 
  • #13
Thanks for the link. I had a quick read but it looks a bit over my head! But will try and go through it later.

Nugatory said:
which makes experiments that find violations more convincing evidence, not less.

And there lies the crux of the problem for me. I can see that the experiments prove quantum mechanics but I haven't quite grasped how they disprove hidden variables.
 
  • #14
rede96 said:
However, in the case with photon polarisation, if a photon is in a certain angular state, the probability of detection is not always 100%, it depends on the angle of polarisation of the photon in relationship to the angle of the detector. ...
So where am I going wrong?

That is not correct. Of course there are inefficiencies etc. but those can be ignored for our case, which is the IDEAL case. A beam splitter is used to determine polarization so that all photons are detected. In actual experiments there is no statistical connection between the observation angle and the detection ratio. So as long as that is true, there is nothing to talk about.

Nugatory is completely correct but for your purposes, forget the fair sampling issue UNTIL you understand the main issue. Go back to the hat/scarf/glove example and DON'T leave it until you understand that the classical example and the quantum examples are in contradiction.

So... the next step for you should be to frame your question in terms of the example we have been discussing. The analogy is that no hat and hat are both detected with equal efficiency, and you should assume it is 100%. Once you master this example, we can move on to sampling. Remember: there are always perfect correlations when the SAME angle is measured. That should serve to remind you of the issue of hidden variables.
 
  • #15
It is true that some polarizers only allow some light through. But most Bell tests use Polarizing Beam Splitters (PBS), which separate the input stream into an H output stream and a V output stream. Detectors are placed at BOTH outputs. The PBS itself is oriented at any angle you desire. As I said previously, there is no useful connection between the likelihood of a detection as H or as V and the PBS orientation.

In our hat/glove/scarf example: you will see Hat/NoHat at roughly equal rates, likewise Scarf/NoScarf and Glove/NoGlove. You can hand prepare any* sample, as a matter of fact, and the Bell Inequality will *always* be respected as long as the choice of measurements I select is unknown to you in advance.

Detection rate is not an issue in this. But even if it was, you should be able to see that: UNLESS there is a special connection between the 2 settings being chosen AND the chance of detection, the result will be a fair sample. So again, detection rate would not be an issue. If there were such a special connection, it would need to be non-local since you can switch settings very fast (such tests have been run).

*large enough...
 
  • #16
DrChinese said:
So... the next step for you should be to frame your question in terms of the example we have been discussing. The analogy is that no hat and hat are both detected with equal efficiency, and you should assume it is 100%. Once you master this example, we can move on to sampling.
Firstly, thanks very much to you and Nugatory for your help on this. It is very much appreciated and I am trying to get my head around it! So to help me do that, in my own clumsy way, I decided to make a simulation of both the classical and quantum scenarios in a spreadsheet. I hope I have done this correctly, and have listed my method and results below.

Scenario 1 – Hats, Scarfs & Gloves.

For each case (hat, glove and scarf) I used a random number generator to pick whether the twins were wearing an item or not (both twins dressed the same), which is a probability of 0.5 for each item.

I then used another random number generator to select which detector Alice and Bob would choose (hat, scarf or glove detector), and of course this is a 0.333 probability of selecting a particular detector.

So how it works is: if Alice chooses the ‘Hat’ detector and she finds her twin is wearing a hat, then this is a positive detection. If she finds the child is not wearing a hat then this is a negative detection. And so on, depending on what detector she and Bob had selected and the item of clothing the twins were wearing.

I then compared Alice and Bob’s result for each test. If both had the same result (e.g. both made a positive detection or both made a negative detection) then I classed this as a MATCH. If one had made a detection but the other not, then this was a FAIL

I did this by formula in the rows, making 10,000 rows or tests.

The results showed that Alice & Bob had a match 3384 times out of the 10,000 tries, so a MATCH probability of 0.3384. I re-did this another 30 times, as it is just pressing a button to recalculate, and the average of the 30 tests was a probability of a MATCH of 0.3385.

As a check, I added up all the positive detections from Alice and Bob, which came to a probability of 0.5007. Which, as I understand it, is what you would expect, i.e. they make a positive detection 50% of the time, as there is a 50% chance of a child wearing a hat, scarf or glove.

Scenario 2 – Photon Polarisation

I'm not sure I fully understand the actual physics here, but I sort of get the probabilities. So for this I used a random number generator between 0 and 359 to randomly choose the angle of the pair of photons (both the same), assuming this was the Z direction (or up/down). Also, the angles were integers.

I then used a random number generator between 1 and 3 to select at random which detector Alice and Bob would choose separately. 1= 0 degrees, 2 = 120 degrees and 3 = 240 degrees.

For both Alice and Bob, I then did a calculation of the probability of detection depending on the angle of the photon and the detector selected. E.g. if Alice chose the 120 degree detector and the angle of the photon was 200, then I subtract 120 from 200 to get 80 degrees, and then work out the probability of detection using the formula cos²θ.

Once I had the probability, to simulate a test I generated a random number between 0 and 1 (I think Excel does this to 6 decimal places). If the random number was less than or equal to the probability calculated for detection, then I classed this as a positive detection; otherwise it is a negative detection.

So for example, if the probability for detection is 0.75 and the random number generated is less, say 0.563, then this is a positive detection. If the probability was say 0.1 and the random number greater, say 0.333, then this is a negative detection.

I then compared Alice and Bob’s results and where they had the same result (e.g. both positive or both negative result) then I classed this as a MATCH. If not it was a FAIL.

Like in the first case I did these formulas in rows and made 10,000 rows.

I then ran the simulation of 10,000 rows 30 times and got a final MATCH probability of 0.2506, comparing the total number of matches to the total number of tests.

I also calculated the total probability of each individual test, which was 0.5024, which seems to make sense.

So in conclusion, I get a 0.3385 probability of a MATCH in the classical case and 0.2506 in the simulated quantum case, which seems to match experimental results.

I can also change the detector angles and get different match probabilities.

The only thing I am not sure about is whether I should have compared MATCHES to the total number of tests, OR only to those where Alice and Bob had chosen different detectors.

Does all that make sense?

EDIT:

As an observation, I checked the cases where Alice and Bob had selected the same detector, hence were testing the same angle, and around 850 times the simulation got a different test result for Alice compared to Bob.

However, if I was to class these ‘missed detections’ as a MATCH and add them to the MATCH results, then the probability of detecting a MATCH would have been around 0.333.

DrChinese said:
Remember: there are always perfect correlations when the SAME angle is measured. That should serve to remind you of the issue of hidden variables.

So does this account for the difference seen in my observation? I assumed the probability of detection is still independent? But if I understand your statement, you are saying that in real experiments, the same angle will always give the same result? If that is the case I can understand why that disproves hidden variables. Or is that just a coincidence?
 
  • #17
rede96 said:
Scenario 1 – Hats, Scarfs & Gloves.

The results showed that Alice & Bob had a match 3384 times out of the 10,000 tries, so a MATCH probability of 0.3384. I re-did this another 30 times, as it is just pressing a button to recalculate, and the average of the 30 tests was a probability of a MATCH of 0.3385.

As a check, I added up all the positive detections from Alice and Bob, which came to a probability of 0.5007. Which, as I understand it, is what you would expect, i.e. they make a positive detection 50% of the time, as there is a 50% chance of a child wearing a hat, scarf or glove.

Scenario 2 – Photon Polarisation

For both Alice and Bob, I then did a calculation of the probability of detection depending on the angle of the photon and the detector selected. E.g. if Alice chose the 120 degree detector and the angle of the photon was 200, then I subtract 120 from 200 to get 80 degrees, and then work out the probability of detection using the formula cos²θ.

I then ran the simulation of 10,000 rows 30 times and got a final MATCH probability of 0.2506, comparing the total number of matches to the total number of tests.

I also calculated the total probability of each individual test, which was 0.5024, which seems to make sense.

Scenario 3 – Observation

As an observation, I checked the cases where Alice and Bob had selected the same detector, hence were testing the same angle, and around 850 times the simulation got a different test result for Alice compared to Bob.

Scenario 1: This is what I would expect. Something above .3333 but fairly close to it.

Scenario 2: This may be the right answer, but the underlying calculation is wrong if you are trying to simulate Quantum Mechanics. There is no "200" (your example)! There is ONLY (and I mean only) cos^2(Alice's Angle - Bob's Angle). For this example, that is .25 and for a random sample, you will get a figure near .25 (which you did). Again, right answer, wrong formula.

Scenario 3: This is where the flaw of the previous scenario becomes glaringly apparent. You should NOT get any differences at the SAME angle. Even with some random variability built in, it should be quite close to 0. Certainly nothing like 8%. And because of this, you think it explains something... which it doesn't because this is incorrect.

In my simulations, this is ALWAYS exactly 0.
 
  • #18
DrChinese said:
There is no "200" (your example)! There is ONLY (and I mean only) cos^2(Alice's Angle - Bob's Angle).

Ok. That has confused me a bit. Why cos^2(Alice's Angle - Bob's Angle)?

The 200 was just an example of the random polarisation of a photon. As I understand it, in order to simulate whether the photon would be detected or not, I have to work out the probability of EACH detection test. So if a randomly selected detector was at 0 degrees and a randomly polarised photon was at 90 degrees, then the probability of detection is cos^2(90 degrees), which is 0. If I have a photon at 200 degrees and a detector at 120 degrees, then the probability is cos^2(200 degrees - 120 degrees), which is about 0.03. As I understand it, the probability is based on the difference in angle between the detector and the angle of polarisation. Is that correct?

So in order to simulate a detection, I generate a random number between 0 and 1 and if that is less than or equal to the probability, then I say that the photon was detected.

So for each valid test (a valid test being where Alice and Bob have randomly selected different detectors, e.g. 120 and 240, or 0 and 120...), if Alice and Bob have the same result (both detected or both did not detect a photon) then this is a match. So I add up all the matches and divide by the number of valid tests, and I should get >= 0.333? But I know in real experiments that would be 0.25.

What I found strange is that if I simulate the above, then yes, I do get >= 0.333 if I compare matches against valid tests, but I always get 0.25 if I compare matches against the total number of tests. Which I assumed is just a coincidence.

DrChinese said:
Scenario 3: This is where the flaw of the previous scenario becomes glaringly apparent. You should NOT get any differences at the SAME angle. Even with some random variability built in, it should be quite close to 0. Certainly nothing like 8%. And because of this, you think it explains something... which it doesn't because this is incorrect.

Yes, I sort of figured I am doing something wrong. But just wanted to understand the first part.
 
  • #19
Nugatory said:
We aren't using a classical tool to predict a quantum mechanical outcome here. We're doing experiments to tell us what the outcome is, then comparing that outcome to the predictions of various theories.

Bell's theorem is a mathematical and logical statement: If we start with the premise that there are local hidden variables (as defined in a particular way), then we can logically prove that a particular inequality must always hold. As such, it neither confirms nor denies quantum mechanics; it's just a statement about the general properties of any theory that includes local hidden variables.

Then if some experiment shows the inequality not holding we know that one of three things is going on:
1) The premise is incorrect; there are no local hidden variables using Bell's definition.
2) There is an error in the logical proof so the inequality does not in fact follow from the premise.
3) There is an error in the experimental procedure, so the inequality is not in fact being violated.
Neat overview!
#2 is pretty much impossible - the proof is only a few pages long, is momma-poppa simple, and has withstood a half-century of scrutiny by some of the world's best skeptics.
I'm sure that you are aware of the criticism by Jaynes that Bell did not strictly apply the formal rules of probability calculus, so that he - right at the start - made a possibly unwarranted assumption which allowed him to proceed with a simple formula. In no paper did I find a formal proof that his objection is harmless. Do you know a paper that does?
#3 is wildly implausible - many different teams using many different technologies have done many different experiments with different types of entangled particles, so they would somehow have to have made independent mistakes that all happened to yield the same wrong answer.
Not necessarily so. All teams seem to apply the same kind of reasoning about why the shortcomings in each experiment don't matter; they are not really "thinking outside the box". For example, they look at using different mechanisms without considering the possibility that an as yet unknown principle may be unifying the results. A similar thing happened with Michelson-Morley, when it seemed impossible to match mechanics with electromagnetism in a logical way.
 
  • #20
harrylin said:
[..] A similar thing happened with Michelson-Morley, when it seemed impossible to match mechanics with electromagnetism in a logical way.
PS I didn't phrase that well. In fact, matching mechanics with electromagnetism was the path to the solution. But shortly before that path was chosen, the results of different experiments of Michelson-Morley (what we now call "classical" light experiments) contradicted all imagined models of light; in fact they appeared to contradict each other! It looked as if those and other experimental results could not be described with a consistent realistic model.
 
  • #21
rede96 said:
Ok. That has confused me a bit. Why cos^2(Alice's Angle - Bob's Angle)?

The 200 was just an example of the random polarisation of a photon. As I understand it, in order to simulate whether the photon would be detected or not, I have to work out the probability of EACH detection test. So if a randomly selected detector was at 0 degrees and a randomly polarised photon was at 90 degrees, then the probability of detection is cos^2(90 degrees), which is 0. If I have a photon at 200 degrees and a detector at 120 degrees, then the probability is cos^2(200 degrees - 120 degrees), which is about 0.03. As I understand it, the probability is based on the difference in angle between the detector and the angle of polarisation. Is that correct?

Scenario 2 should be the quantum scenario. QM rules are different than what you applied. What you did was a classical example and it does not reproduce entangled statistics. This is easily seen by your attempt to validate where the angles are the same, which did not yield 100% match in your run.

That is because the quantum rule is the one I gave: cos^2(Alice's Angle - Bob's Angle). Its derivation is a bit complicated to explain and not really critical to your understanding at this point. It looks like Malus's Law but it is derived independently.

So Scenario 1 is a classical example, and 2 is the actual quantum simulation. The difference should be enough to convince you that no classical data set - no matter how you rig it - will match QM.
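If it helps, here is a minimal Python sketch (my own; the names and structure are made up) that runs both rules side by side - the hidden-variable rule you used in the spreadsheet, and the quantum rule - and tallies match rates at same and at different settings:

```python
import random
from math import cos, radians

ANGLES = [0, 120, 240]
TRIALS = 200_000

def lhv_match(a, b):
    # Spreadsheet-style hidden variable model: both photons share a hidden
    # polarisation lam; each passes its OWN detector independently with
    # probability cos^2(detector angle - lam).
    lam = random.uniform(0.0, 360.0)
    alice = random.random() < cos(radians(a - lam)) ** 2
    bob = random.random() < cos(radians(b - lam)) ** 2
    return alice == bob

def qm_match(a, b):
    # Quantum rule: the PAIR gives the same outcome with prob cos^2(a - b).
    return random.random() < cos(radians(a - b)) ** 2

for name, match in (("LHV", lhv_match), ("QM ", qm_match)):
    counts = {"same": [0, 0], "diff": [0, 0]}   # [matches, trials]
    for _ in range(TRIALS):
        a, b = random.choice(ANGLES), random.choice(ANGLES)
        key = "same" if a == b else "diff"
        counts[key][0] += match(a, b)
        counts[key][1] += 1
    for key, (m, n) in counts.items():
        print(f"{name} {key}-angle match rate: {m / n:.3f}")
```

Approximate output: the hidden-variable model gives about 0.75 at the same angle and 0.375 at different angles, while the QM rule gives 1.0 and 0.25. The same-angle line is the tell: a model like your Scenario 2 cannot reproduce the perfect correlations, which is exactly the discrepancy you saw in your Scenario 3 check.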
 
  • #22
DrChinese said:
Scenario 2 should be the quantum scenario. QM rules are different than what you applied. What you did was a classical example and it does not reproduce entangled statistics.

DrChinese said:
So Scenario 1 is a classical example, and 2 is the actual quantum simulation. The difference should be enough to convince that no classical data set - no matter how you rig it - will match QM.

Yes, I understand that now thanks. But it was interesting to play around with the classical probabilities and see what sort of results I got. It helped my understanding.
DrChinese said:
That is because the quantum rule is the one I gave: cos^2(Alice's Angle - Bob's Angle).

This is what I find interesting, because as I understand it I would get the same result (0.25) polarising single photons in one direction (e.g. 0 degrees) and then measuring them again against one of the other two angles (e.g. 120 and 240). In fact the probability calculation is the same for each combination.

So I can see, with Alice and Bob's experiment with paired photons, how the QM situation is different. It's like the measurement of one photon somehow "polarises" the other in the same direction.

But as I am still a very classically thinking person :) I can't help but think of this property being due to some strangeness that resulted from the process of creating the paired photons. So it was pre-determined.

So as a bit of fun, rather than try to disprove the hidden variable argument, I wanted to try and think of a logical argument that would disprove the spooky action at a distance argument. :D

For example, I know that if I have a process for creating two separately, randomly generated photons (A and B) and polarise A to 0 degrees for example, and then randomly measure A against 120 and 240 degrees, I would get a result of 0.25. If I was then to just measure B randomly against 120 and 240, without polarising it first, then I would expect to get an outcome of 0.5.

So I can re-create this with entangled photons. Polarise A to 0 degrees, then measure B randomly against 120 and 240. If the outcome is 0.5, then I know that the polarisation of A had no effect on B. If the outcome is 0.25 then there is something spooky going on.

If it came out at 0.5, the fact that paired photons show 100% correlation when measured against the same angle would just be strangeness in the process of creating them. It would be pre-determined.

Does that logic hold? Has that test been done?
 
  • #23
Never mind... I just posted a long question before noticing that 120 and 240 looked like a 90 degree angle in the video, but they are not... :)

Carry on.
 
  • #25
Nugatory said:
@rede96 You might find this Scientific american article interesting: https://www.scientificamerican.com/media/pdf/197911_0158.pdf

Thanks for the great link. It does make things a bit clearer but I am still left with the same issue as unfortunately my thought process is much more elementary.

I tend to think that any system has a unique set of properties that make it behave like it does when measured. Which could include the "measurement" as part of that system.

So when we measure a perfect negative correlation in spin between entangled electrons, for example, it is the properties of that system that give that result, whatever that result turns out to be.

Now it could be that the measurement of that system is responsible in some way for the result OR it could be another property of that system OR a combination of both.

BUT if it was just solely the "measurement" part of the system that was responsible for the perfect correlation we see in electron spin, then we would see that correlation for any pair of electrons that we care to measure, entangled or otherwise. This is obviously not the case.

This implies a connection which exists between the paired particles. Now it may be that this connection only materialises instantaneously at the point of measurement, but as measurement alone is not responsible for the outcome, then there must be some property that exists in the system prior to the measurement. It can't be any other way.

So I understand that Bell's inequality demonstrates that there could not be local hidden variables, BUT that may just be local hidden variables as we understand them classically. There could still be something else going on within the system that pre-determines the outcome.
 
  • #26
rede96 said:
So I understand that Bell's inequality demonstrates that there could not be local hidden variables, BUT that may just be local hidden variables as we understand them classically. There could still be something else going on within the system that pre-determines the outcome.

The qualification about classical understanding is well-taken. The entire exercise can be traced back to Einstein's suggestion in the 1935 EPR paper that there ought to be an explanation for the correlations of entanglement measurements that is based on local hidden variables as we understand them classically, and therefore that uncovering these hidden variables was an open problem.

Indeed, there is a local (in the sense of respecting relativistic causality) hidden variable theory that allows Bell's inequality to be violated. It is superdeterminism (google for "Gerard 't Hooft superdeterminism" for more), and unfortunately it is even weirder than quantum mechanics and doesn't give us variables consistent with our classical understanding.

It would be fair to say that Bell's theorem and Aspect-style experiments tell us that if quantum mechanics isn't the last word, then whatever replaces it must be no less weird and at odds with our classical intuition.
 
  • #27
rede96 said:
Now it may be that this connection only materialises instantaneously at the point of measurement, but as measurement alone is not responsible for the outcome, then there must be some property that exists in the system prior to the measurement. It can't be any other way.

It might help to think about this...

A wave can be decomposed into a set of sine wave components having various amplitudes, phases, and frequencies. When you do that it is like using the sine wave as your measuring instrument on the original wave, and you will get a specific set of various sine waves that when recombined will produce the original wave. You have "measured the sineyness" of the original wave.

You can do this again to the same wave, but this time instead of decomposing it into sine waves, you could use cosine, or triangle, or square, impulse, or any arbitrary waveform (including any weird one you can think of) as your measuring instrument, and you would get different sets of results (sets of those particular waveforms of different amplitudes, phases, and frequencies). You will have measured the "insert decomposition waveform here" attribute of the original wave.

Quantum measurement is similar to this where the decision of what attribute to measure is like picking the measurement waveform with which to decompose the target waveform. The simple wave forms used as measures happen to correspond to "familiar" attributes like position and momentum and require relatively "simple" experimental apparatus and procedures to perform.

But the point is that one could decide to measure the wave using some very peculiar decomposition waveform analogous to the honk of a car horn or a voice saying "Hello!"... this will return a set of component waves in that peculiar form of different amplitudes, phases, and frequencies, but assigning the attribute you just measured to some familiar physical property is not going to happen, and arranging an experimental preparation to do so would be impossibly complex... but you would have measured the "car horn honkyness" or the "Hello voicyness" of the wave, whatever those attributes might mean.

In this sense, the original wave to be examined has no intrinsic attributes (even simple ones) apart from first operationally choosing the waveform perspective with which it might be decomposed... the necessary choice of measurement is responsible for the outcome and the attribute does not exist independently nor in advance of operationally choosing a perspective through which to examine the original wave...
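A toy numerical version of the same idea (a rough sketch; everything here is made up for illustration): the same object has different "components" depending on which basis you choose to decompose it in, and no one list of components is more intrinsic than another...

```python
import math

# A simple stand-in for the wave to be examined: a 2-D vector.
wave = (3.0, 1.0)

def components(vec, basis):
    # Decompose vec into an orthonormal basis (dot product with each axis).
    return [vec[0] * e[0] + vec[1] * e[1] for e in basis]

standard = [(1.0, 0.0), (0.0, 1.0)]
theta = math.radians(30)                      # an arbitrary "measurement angle"
rotated = [(math.cos(theta), math.sin(theta)),
           (-math.sin(theta), math.cos(theta))]

print(components(wave, standard))   # [3.0, 1.0]
print(components(wave, rotated))    # about [3.098, -0.634]: same wave, new basis
```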

And if I have butchered this explanation technically somewhat, anyone please feel free to comment and clarify.
 
  • #28
bahamagreen said:
But the point is that one could decide to measure the wave using some very peculiar decomposition waveform analogous to the honk of a car horn or a voice saying "Hello!"... this will return a set of component waves in that peculiar form of different amplitudes, phases, and frequencies, but assigning the attribute you just measured to some familiar physical property is not going to happen, and arranging an experimental preparation to do so would be impossibly complex... but you would have measured the "car horn honkyness" or the "Hello voicyness" of the wave, whatever those attributes might mean.

I think I get your point: just because I can measure something about a system doesn't mean that I can attribute that measurement to a physical property. But even if I was to measure the colour of a car, for example, by honking a horn at it and then measuring the sound waves reflected: if I got repeatable results, or results which correlated with different size horns, then there must be some physical property of that system (the car) that is responsible for that. Even if it wasn't what I was hoping to measure.

So for me there is some physical attribute of entangled particles that yield the results from the measurement exercise discussed.
 
  • #29
rede96 said:
If I got repeatable results or results which correlated with different size horns, then there must be some physical property of that system (the car) that is responsible for that.

That is pretty much Einstein's definition of "element of reality" in the EPR paper (if you haven't read it yet, it's worth reading). It works just fine as long as you're willing to give up locality and accept that the physical property of the system which is responsible for the measurement of one member of the entangled pair is influenced by what we do to the other member.
 
  • #30
Nugatory said:
That is pretty much Einstein's definition of "element of reality" in the EPR paper (if you haven't read it yet, it's worth reading).

I did read the original paper but it got a bit too mathematical for me. So I tend to read / research the dummy's versions of QM. Which I know isn't always great as some of the simple explanations are really misleading, but along with some MIT and other lectures I have watched it has given me the basics I think.

Nugatory said:
It works just fine as long as you're willing to give up locality and accept that the physical property of the system which is responsible for the measurement of one member of the entangled pair is influenced by what we do to the other member.

And there is the issue for me. I don't pretend to know a lot about this subject, I just find it so intriguing. And I am very open minded. But even from everything I have read so far I can't honestly say I fully understand it enough to be convinced there aren't hidden variables of some nature that pre-determine the outcome of two entangled particles being measured specifically for spin direction.

As I have said before, I have a bit of a brute-force approach to learning, so I have been trying to design a classical experiment that will give the same QM result for the tests being discussed in this thread. I know that is very unlikely (and probably not possible!) but each failed attempt helps me to understand things a bit more. So I hope people don't misread my intentions. I make no claims that Bell's inequality is false or anything else. But by testing it in my own elementary way, I find it helps me understand things a bit more. And I enjoy it.

I was really hoping I could find out how to calculate different QM outcomes for testing the spin of entangled particles when using different angles other than 0, 120 and 240. Say, 10, 60 and 200. I did make another post but so far no luck. So was hoping someone could point me in the right direction?
 
  • #31
rede96 said:
I was really hoping I could find out how to calculate different QM outcomes for testing the spin of entangled particles when using different angles other than 0, 120 and 240. Say, 10, 60 and 200. I did make another post but so far no luck. So was hoping someone could point me in the right direction?

The normal way you would do that would be to weight each of the options as 1/3 and sum to get an average. So:

a) Match(10,60)= .4132
b) Match(10,200)= .9698
c) Match(60,200)= .5868

However, there is no guarantee that you will get a scenario that leads to a Bell-like inequality in the exact same manner as before. The other one "happens" to work out nicely for the job. Most combinations of angles lead to results that are classically nonsensical, but it is sometimes quite complicated to see the problem. For example, there is a specific combo for the 10, 60 and 200 degree triple that is impossible to obtain classically. I know because I built a generator that graphs the permutations.

The quantum prediction for this case (referencing a/b/c above):

a + ~b - c is -.1434 (you can easily obtain that from above), whereas the classical prediction is >= 0; note that ~b in this equation is Mismatch(10,200), or .0302.

This probably makes no sense as I have described it, but the formula above works out to be a couple of permutations. Ergo, the classical sum must always be zero or above (for any set of outcome permutations), whereas experiment yields a different value. And yes, it's negative.
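To tie the numbers together, a quick check (my sketch) using the cos^2 rule from earlier in the thread:

```python
from math import cos, radians

def match(a, b):
    # quantum match probability at detector angles a and b (degrees)
    return cos(radians(a - b)) ** 2

a = match(10, 60)    # 0.4132
b = match(10, 200)   # 0.9698
c = match(60, 200)   # 0.5868
print(f"{a:.4f} {b:.4f} {c:.4f}")
print(f"average match: {(a + b + c) / 3:.4f}")            # 0.6566
print(f"a + Mismatch(10,200) - c: {a + (1 - b) - c:.4f}") # about -0.143, negative
```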
 
  • #32
DrChinese said:
The normal way you would do that would be to weight each of the options as 1/3 and sum to get an average. So:

a) Match(10,60)= .4132
b) Match(10,200)= .9698
c) Match(60,200)= .5868

However, there is no guarantee that you will get a scenario that leads to a Bell-like inequality in the exact same manner as before.

Thank you for that. But at the moment I wasn't looking for a Bell-like inequality; I was just trying to find the right way to calculate the quantum result for the experiments if they were done in the same way as 0, 120 and 240 but using the angles above (or any set of angles). Which, if I understood you, would be 0.6566?

So if I did 100,000 tests where Alice and Bob selected at random from detectors at 10, 60 and 200 the ratio of valid matches would be 0.6566. Is that correct?

DrChinese said:
The other one "happens" to work out nicely for the job. Most combinations of angles that lead to nonsensical results but it is sometimes quite complicated to see the problem. For example, there is a specific combo for the triple 10, 60 and 200 degree that is impossible to obtain classically. I know because I build a generator that graphs the permutations.

The quantum prediction for this case (referencing a/b/c above):

a + ~b - c is -.1434 (you can easily obtain that from above), whereas the classical prediction is >= 0

Ah ok, so that is just one of the 8 possible permutations. e.g. (A+, B-, C-)? But if I look at the original example of 0,120,240, wouldn't I still get a negative result for this permutation? E.g. 0.25 + (-0.25) + (-0.25) = -0.25?

DrChinese said:
note that ~b in this equation is Mismatch(10,200) or .0302.

I didn't quite understand that. Could you explain why that is a mismatch and where 0.0302 comes from please? That seems to be around 80 degrees working backwards.

DrChinese said:
This probably makes no sense as I have described it, but the formula above works out to be a couple of permutations. Ergo, the classical sum must always be zero or above (for any set of outcome permutations), whereas experiment yields a different value. And yes, it's negative.

Ok, that really confused me! So are you saying that it is possible that the total results from an experiment (i.e. multiple testing of the three angles) will give a negative ratio of matches? I think I have misunderstood that, as you can only get one of two results from each test: either Alice and Bob's detectors match or they don't. So it is impossible to get a negative number of matches, as the ratio is total matches / total number of valid tests.

As I said at the moment, I just wanted to know if there was a way I could calculate what the experimental result would be using different angles. Which if I understood was just the average of the 3 probability options you mentioned.
 
  • #33
rede96 said:
1. Thank you for that. But at the moment I wasn't looking for a Bell-like inequality; I was just trying to find the right way to calculate the quantum result for the experiments if they were done in the same way as 0, 120 and 240 but using the angles above (or any set of angles). Which, if I understood you, would be 0.6566? ... So if I did 100,000 tests where Alice and Bob selected at random from detectors at 10, 60 and 200, the ratio of valid matches would be 0.6566. Is that correct?

2. Ah ok, so that is just one of the 8 possible permutations, e.g. (A+, B-, C-)? But if I look at the original example of 0, 120, 240, wouldn't I still get a negative result for this permutation? E.g. 0.25 + (-0.25) + (-0.25) = -0.25?

3. Could you explain why that is a mismatch and where 0.0302 comes from, please? That seems to be around 80 degrees, working backwards.

1. Assuming you selected 2 of the 3 angles to observe randomly, yes.

2. 8 permutations, yes, that is very good: +++, ++-, ... , ---. It turns out that 2 of the 8 cases can be represented in the manner I provided. Those cases are ++- and --+. You can best see the logic chain in detail at a page of mine. That page basically shows you how to arrange the terms so this result is obtained.

http://www.drchinese.com/David/Bell_Theorem_Negative_Probabilities.htm

Basically, you go back to this:

a) Match(10,60)= .4132
b) Mismatch(10,200)= .0302
c) Match(60,200)= .5868

If you think of the 8 permutations, then each term above represents 4 of them. The terms can be rearranged* in such a way that (a + b - c) = 2 * P(++- , --+). And that equals -.1434 (obtained from .4132 + .0302 - .5868), as I said previously. Obviously a classical prediction for 2 * P(++- , --+) is greater than or equal to 0. In actual experiments, you never see a negative result directly. You see experimental support for the individual a, b and c terms above. But assuming you also believe that Alice's choice of angle does not affect Bob's outcome, and vice versa, then the classical prediction and the experimental prediction (which is the same as a + b - c, or -.1434) are inconsistent.

So the conclusion from all of this manipulation is: any 3 angles you pick (with a few exceptions) will ultimately lead to a Bell Inequality. But the 0/120/240 case is easier to visualize.

3. Mismatch % is of course nothing but (1 - Match %). So 1 - .9698 = .0302.

* The rearrangement can be seen if you study the link. Basically: a, b and c each represent 4 of the 8 permutations, but each a different 4. When you combine them via a + b - c, you get cancellation of some terms, leaving you with 2 sets of terms, for ++- and --+. This is equal to 2 * P(++- , --+). QED.
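The cancellation can also be checked mechanically. Here is a small Python sketch (an illustration of the argument above, with a, b and c as in the list earlier in this post) that evaluates the a + b - c indicator for each of the 8 permutations; only ++- and --+ contribute, each counting 2:

```python
# Evaluate the indicator a + b - c for every permutation of predetermined
# outcomes at the three angles, where a = Match(10,60), b = Mismatch(10,200),
# c = Match(60,200). Only ++- and --+ give a nonzero value (namely 2), so
# classically a + b - c = 2 * P(++- or --+) >= 0.
from itertools import product

for outcomes in product('+-', repeat=3):
    x1, x2, x3 = outcomes
    a = 1 if x1 == x2 else 0   # this permutation counts toward Match(10,60)
    b = 1 if x1 != x3 else 0   # ... toward Mismatch(10,200)
    c = 1 if x2 == x3 else 0   # ... toward Match(60,200)
    print(''.join(outcomes), '->', a + b - c)
```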
 
  • #34
Yes, the above is a bit convoluted. But this is simply a way to prepare a Bell Inequality. The reason there are examples like hats/scarves/gloves is that they are easier to follow than the detail above. But it is really not that hard once you jump all the way in.
 
  • #35
DrChinese said:
Yes, the above is a bit convoluted. But this is simply a way to prepare a Bell Inequality. The reason there are examples like hats/scarves/gloves is that they are easier to follow than the detail above. But it is really not that hard once you jump all the way in.

Thanks for your help with this. I think I understand the EPR argument a lot better now and can see how Bell's inequality and the experimental results lead to the conclusion that there is something very non-classical going on. But my mind still struggles to accept that this is spooky action at a distance, even though it is obvious one is forced to accept that conclusion.

I was also curious to know: is entanglement something that persists? Once the wave function has collapsed, I assume that anything further I do to particle A, such as running it through a magnetic field and changing the spin direction for example, would have no effect on particle B? Or are they forever entangled?
 

What is entanglement?

Entanglement is a phenomenon in quantum mechanics where two or more particles become connected in such a way that the state of one particle is dependent on the state of the other, even when they are separated by large distances.

What is Bell's inequality?

Bell's inequality is a mathematical expression that sets an upper limit on how strongly the measurement results of two particles can be correlated in any classical, local hidden-variable theory. If this inequality is violated, it suggests that quantum mechanics is the more accurate description of reality.

How does entanglement relate to Bell's inequality?

Entanglement is a key concept in Bell's inequality. The violation of Bell's inequality demonstrates that entanglement exists and that classical physics cannot fully explain the behavior of entangled particles.

What are some real-world applications of entanglement and Bell's inequality?

Entanglement has potential applications in secure communication, quantum computing, and quantum teleportation. Bell's inequality has been used to test the foundations of quantum mechanics and to develop new technologies such as quantum cryptography.

Can entanglement and Bell's inequality be explained in simple terms?

While the concepts of entanglement and Bell's inequality can be difficult to understand, they can be explained in simple terms. Entanglement is like a pair of gloves that are always connected, no matter how far apart they are. Bell's inequality is like a test that determines if the gloves are truly connected or if they are just coincidentally similar.
