Why do experiments violating Bell's inequality matter?

In summary, the conversation discusses experiments that demonstrate violation of Bell's inequality and the concept of entanglement. The original poster argues that suitably weighted hidden variables can reproduce the observed correlations, first with a table of photon "instruction sets" and then with a color-blindness analogy in which any random color has an equal chance of being seen as bright by Alice and by Bob. The replies explain why these models are not genuine local hidden-variable accounts: a photon that passes a polarizer always passes a second polarizer at the same angle, and a local realistic model must assign each particle definite outcomes for every setting, which cannot reproduce the observed cos² correlations.
  • #1
Greg-ulate
I keep looking at these experiments that demonstrate violation of Bell's inequality and I really can't figure out why anyone cares. The scenario always seems wrong in some way.

For example, the EPR paradox. The argument goes like this: if you start out with a source of entangled "photons", you know that they have 100% correlated polarization. So you set up some polarizers at +120, 0, and -120 degrees, and Alice and Bob each pick randomly which one to use. They do this a bunch of times and find that each of them independently measures the polarization of the light to be in the direction of their own polarizer 50% of the time, but they record the same result as one another only 25% of the time. This defies the idea that there are hidden variables flying around on little local point-like "particles", pre-existing values that are merely discovered upon measurement, unless the particles are also able to signal each other superluminally.

Wait a minute, why does it defy that idea? Because if the "particles" had little hidden variables such as "I will go through polarizer 0 and -120, but not +120", then we could make a table like this:

-120 | 0 | +120 | A:0, B:+120 | A:0, B:-120 | A:+120, B:-120 | %Same
 Y   | Y |  Y   |    Same     |    Same     |      Same      |   1
 Y   | N |  Y   |    Diff     |    Diff     |      Same      |  1/3
 Y   | Y |  N   |    Diff     |    Same     |      Diff      |  1/3
 Y   | N |  N   |    Same     |    Diff     |      Diff      |  1/3
etc...

and then, no matter which combination of polarizers was picked, we would never get less than 33% identical measurements, unlike the 25% we got by experiment.
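To make that counting explicit, here is a rough Python sketch. It just enumerates the eight possible instruction sets from the table above and, for each, averages the agreement over the three unequal-setting pairs; the minimum comes out to 1/3, versus the quantum cos²(120°) = 0.25.

Python:
from itertools import product

# The three polarizer settings available to Alice and Bob.
settings = (-120, 0, 120)

# The three unequal-setting pairs from the table above.
pairs = [(-120, 0), (-120, 120), (0, 120)]

def agreement_fraction(hidden):
    """hidden maps each setting to True/False: 'would pass' / 'would not pass'.
    Both photons carry the same instruction set, so Alice and Bob agree on a
    pair of settings exactly when the two instructions match."""
    same = sum(hidden[a] == hidden[b] for a, b in pairs)
    return same / len(pairs)

fractions = []
for bits in product([True, False], repeat=3):   # all 2^3 = 8 instruction sets
    hidden = dict(zip(settings, bits))
    fractions.append(agreement_fraction(hidden))

print(min(fractions))   # 0.333... : no instruction set gets below 1/3
print(0.25)             # the quantum cos^2(120 deg) prediction for unequal settings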

OK... so why is this bogus? There has got to be a reason, and I think I have one. Consider each of Alice's settings separately and draw a different table for each.

A:-120
-120 | 0 | +120 | B:+120 | B:0  | %Same
 Y   | Y |  Y   |  Same  | Same |   1
 Y   | N |  Y   |  Same  | Diff |  1/2
 Y   | Y |  N   |  Diff  | Same |  1/2
 Y   | N |  N   |  Diff  | Diff |   0

A:0
-120 | 0 | +120 | B:+120 | B:-120 | %Same
 Y   | Y |  Y   |  Same  |  Same  |   1
 N   | Y |  Y   |  Same  |  Diff  |  1/2
 Y   | Y |  N   |  Diff  |  Same  |  1/2
 N   | Y |  N   |  Diff  |  Diff  |   0

A:+120
-120 | 0 | +120 | B:0  | B:-120 | %Same
 Y   | Y |  Y   | Same |  Same  |   1
 Y   | N |  Y   | Diff |  Same  |  1/2
 N   | Y |  Y   | Same |  Diff  |  1/2
 N   | N |  Y   | Diff |  Diff  |   0

And average the same % with a weight depending on how likely it is to happen... To do that look at the intensity of light that passes through two consecutive polarizers at 120°, cos²(120°) = 0.25.

So a "photon" that goes through a 0 has a 25% chance of also going through a +120, and a 6.25% chance of going through all three in a row. On average for every 16 "photons" I shoot, 1 of them will go through all 3 polarizers, 3 of them will go through 2 only, and the rest of them will only go through the first one.

I define 4 states, {111, 101, 110, 100}, and assign them probabilities based on observation. A 1 means the photon would go through the corresponding polarizer if it encountered it.

There's a 1/16 chance to be in state 111, in which case the "photon" will go through all 3 polarizers.

There's a 4/16 chance that the "photon" will go through 2 filters in a row, including the 1/16 from state 111; therefore states 110 and 101 each have a 3/16 chance.

There's a 12/16 chance to go through the first filter but not the second, but 3 of those 12 could be 101 or 110 depending on which order your filters are in, so that leaves 9/16 for state 100.


-120 | 0 | +120 | B:+120 | B:0  | %Same | weight | proportion
 Y   | Y |  Y   |  Same  | Same |   1   | 0.0625 |     1
 Y   | N |  Y   |  Same  | Diff |  1/2  | 0.1875 |     3
 Y   | Y |  N   |  Diff  | Same |  1/2  | 0.1875 |     3
 Y   | N |  N   |  Diff  | Diff |   0   | 0.5625 |     9

Average: 0.25
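For anyone who wants to check the arithmetic, here is the same weighted average as a rough Python sketch; it just reproduces the table above, nothing more.

Python:
# The four states are (passes -120, passes 0, passes +120),
# with the probabilities assigned above.
state_prob = {
    (1, 1, 1): 1 / 16,
    (1, 0, 1): 3 / 16,
    (1, 1, 0): 3 / 16,
    (1, 0, 0): 9 / 16,
}

def same_fraction(state):
    """Alice is fixed at -120 (every listed state passes it); Bob picks
    +120 or 0 at random, so average over his two possible outcomes."""
    minus120, zero, plus120 = state
    return sum(minus120 == b for b in (plus120, zero)) / 2

average = sum(p * same_fraction(s) for s, p in state_prob.items())
print(average)   # 0.25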

This exactly reproduces the quantum mechanical prediction. Can anyone tell me if any assumptions made depend on non-locality, non-realism, superluminal communication, counter-factual definiteness, conspiracy, magic, or an all-powerful deity?
 
  • #2
Greg-ulate said:
To do that look at the intensity of light that passes through two consecutive polarizers at 120°, cos²(120°) = 0.25.

So a "photon" that goes through a 0 has a 25% chance of also going through a +120, and a 6.25% chance of going through all three in a row. On average for every 16 "photons" I shoot, 1 of them will go through all 3 polarizers, 3 of them will go through 2 only, and the rest of them will only go through the first one.

Once a photon passes through a 120° polarizer, its probability of passing through another 120° polarizer is 100% - always, and no matter what its history was before it reached the first 120° polarizer. See, for example, this link. This isn't an entanglement thing, it's a known fact about how individual photons behave when they interact with polarizers, observed and confirmed long before John Bell started investigating three-axis entanglement situations.
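Here is a rough numerical sketch of that rule, assuming the standard single-photon form of Malus's law: a photon that has passed a polarizer at angle a passes a later polarizer at angle b with probability cos²(b - a), and afterwards is polarized along b.

Python:
import math
import random

def pass_polarizer(photon_angle, polarizer_angle):
    """Single-photon Malus rule: pass with probability cos^2(difference);
    a photon that passes comes out polarized along the polarizer's axis."""
    delta = math.radians(polarizer_angle - photon_angle)
    if random.random() < math.cos(delta) ** 2:
        return polarizer_angle   # new polarization angle
    return None                  # absorbed

random.seed(0)
n, first, second = 100_000, 0, 0
for _ in range(n):
    photon = pass_polarizer(0, 120)          # prepared at 0 deg, polarizer at 120 deg
    if photon is None:
        continue
    first += 1
    if pass_polarizer(photon, 120) is not None:   # another 120 deg polarizer
        second += 1

print(first / n)        # ~0.25, i.e. cos^2(120 deg)
print(second / first)   # 1.0: once through a 120 deg polarizer, always through the next one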

(And, in case it wasn't obvious from the previous discussion, the entanglement is broken by measurement. If two particles are entangled, the first measurements taken of each one will be correlated, but the correlation vanishes for all subsequent measurements.)

Edit: There's got to be a better link for explaining the basic weirdness of single-photon polarization than the one I posted above?
 
  • #3
I meant 120° from each other, not both at +120°. Sorry for that confusion.
 
  • #4
I have another one. This one is harder to explain so cut me some slack.

Imagine two people, Alice and Bob, who are color blind. Or rather, each of them can only detect the brightness of a certain hue, no matter what detector they use. Write the hue that each can detect as a vector (R, B), with R being the brightness of the red component and B the brightness of the blue component. Alice and Bob don't see the same brightness for the same object all the time, since there is a difference in their color blindness. Define an angle between Alice and Bob: θ = arctan((R_Alice - R_Bob)/(B_Alice - B_Bob)).

Alice and Bob pick up randomly colored objects and write down whether or not they think the object is bright. Then they compare their findings and notice the correlation. For any given object, the color can be anything that obeys the relationship R² + B² = 1. R and B can range from -1, no trace of that color, to 1, maximum of that color.

Some examples of possible colors:
1/√2(1,-1) completely red
1/√2(-1,1) completely blue
1/√2(1,1) magenta
(1,0) reddish magenta, highest magnitude red
(0,1) violet, highest magnitude blue

Any color can be represented by (cos θ, sin θ).

Say Alice can only see red. To her, any color with R > 0 looks bright. This corresponds to half the color wheel, with -π/2 < θ < π/2. (Angle measured with respect to the +R direction.)

Say Bob can only see blue. To him, any color with B > 0 looks bright. This corresponds to 0 < θ < π.

For any θ, Alice and Bob each see a 50% chance that a randomly colored object is bright. Since any random color is equally probable, and the overlap of Alice's and Bob's semicircles of brightness perception depends linearly on the angle between them, you might expect the correlation to be linear in θ.
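Here is a quick Monte Carlo check of both of those statements; I'm writing φ for a color's angle and modelling "bright for an observer whose axis is at angle θ" as cos(φ - θ) > 0, per the half-circle picture above.

Python:
import math
import random

random.seed(1)
N = 200_000
theta = math.radians(40)    # an arbitrary angle between Alice's and Bob's hue axes

alice_bright = bob_bright = agree = 0
for _ in range(N):
    phi = random.uniform(-math.pi, math.pi)   # a uniformly random color (cos phi, sin phi)
    a = math.cos(phi) > 0                     # bright for Alice (axis at 0)
    b = math.cos(phi - theta) > 0             # bright for Bob (axis rotated by theta)
    alice_bright += a
    bob_bright += b
    agree += (a == b)

print(alice_bright / N, bob_bright / N)    # both ~0.5, for any theta
print(agree / N, 1 - theta / math.pi)      # the half-circle overlap is linear in theta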

However, consider θ_Alice = 0°, θ_Bob = θ. The average color of all the colors that Alice sees as bright is (1/2)(1, 0), which can be written as (1/2)[C1(cos θ, sin θ) + C2(-sin θ, cos θ)] for any θ, with coefficients C1, C2. Since the sum of each component has to equal the component of the sum, we have

1 = C1 cos θ + C2 sin θ
0 = -C1 sin θ + C2 cos θ

Therefore

C1 = cos θ
C2 = -sin θ

So the average color that Alice sees as bright is

(1/2)(1, 0) = (1/2)[cos θ (cos θ, sin θ) - sin θ (-sin θ, cos θ)]

and the average color Bob sees as bright is

(1/2)(cos θ, sin θ)

The correlation between Alice's and Bob's brightness conclusions is C1 = cos θ (since the C2 part of Alice's average color is random from Bob's perspective, its correlation is 0).
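Just to sanity-check that decomposition numerically, here is a tiny sketch that verifies the coefficients C1 = cos θ, C2 = -sin θ really do recombine to (1, 0) for any θ.

Python:
import math

def coefficients(theta):
    """Coefficients of (1, 0) in the rotated basis {(cos t, sin t), (-sin t, cos t)}."""
    return math.cos(theta), -math.sin(theta)

for deg in (0, 30, 120, 200):
    t = math.radians(deg)
    c1, c2 = coefficients(t)
    # Recombine the two basis vectors with these coefficients.
    x = c1 * math.cos(t) + c2 * (-math.sin(t))
    y = c1 * math.sin(t) + c2 * math.cos(t)
    # The recombined vector is (1, 0) up to floating-point rounding.
    print(deg, round(c1, 3), round(c2, 3), (round(x, 3), round(y, 3)))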

This is the same analysis that predicts spin measurements. I can even generalize it further by adding green for a complete X, Y, Z analog, with the same predictions that QM makes for the Stern-Gerlach experiment. It's literally the same math, except that in my description the object has a "hidden variable", the true color, which can be measured by Alice and Bob along orthogonal directions (red and blue) and therefore yields an accurate value for the true color, but the measurements will be uncorrelated, just like in QM.

The "trick" in both of my examples is showing that the experiment makes a cut on the statistical sample in a certain way that reveals a correlation. The paradox comes from trying to place values that were arrived at by statistical analysis on the individual "particles". The only thing weird about QM is that single "particles" don't assume values in a continuum. Thats why we call it Quantum Mechanics. Its really not that surprising. Its like saying that on average the family household in the USA has 2.1 children. We know that means that some have 1 or 2 or 3, but nobody has 2.1 kids.
 

Attachments

  • colorwheel.png
  • #5
Finally, it should also not be mystifying that QM doesn't allow us to predict certain things with absolute certainty. It can be proven that any particular theory will always be unable to predict certain things. Bell did this. At the end of the day I'm not really arguing with Bell, although based on the examples above, I am still unsure of his proof's validity.

Gödel's incompleteness theorem proves that a formal arithmetic system cannot be both consistent and complete. If it's true for pure math, it damn well better be true for physics.
 
  • #6
Greg-ulate said:
...

Average: 0.25

This exactly reproduces the quantum mechanical prediction. Can anyone tell me if any assumptions made depend on non-locality, non-realism, superluminal communication, counter-factual definiteness, conspiracy, magic, or an all-powerful deity?

Your example really doesn't connect to the EPR paradox in any way.

What you have really said is that quantum optics and classical optics make similar predictions in some ways. OK, sometimes they do. That is not really a big surprise. On the other hand, there are critical ways they don't make the same predictions and classical optics yields predictions that are at variance with experiment. For example:

http://people.whitman.edu/~beckmk/QM/grangier/Thorn_ajp.pdf

"While the classical, wavelike behavior of light - interference and diffraction - has been easily observed in undergraduate laboratories for many years, explicit observation of the quantum nature of light - i.e., photons - is much more difficult. For example, while well-known phenomena such as the photoelectric effect and Compton scattering strongly suggest the existence of photons, they are not definitive proof of their existence. Here we present an experiment, suitable for an undergraduate laboratory, that unequivocally demonstrates the quantum nature of light."

To connect your example to EPR, you need to follow the Bell reasoning rather than the averaging you did. Attempt to construct a dataset of 10 or 20 triplets at your angles. This could be a stream of photons of any type. You will see that they cannot maintain the cos²(θ) relationship between the various angle pairs. That is apples to apples.
 
  • #7
Greg-ulate said:
The paradox comes from trying to place values that were arrived at by statistical analysis on the individual "particles". The only thing weird about QM is that single "particles" don't assume values in a continuum. Thats why we call it Quantum Mechanics. Its really not that surprising. Its like saying that on average the family household in the USA has 2.1 children. We know that means that some have 1 or 2 or 3, but nobody has 2.1 kids.

True enough. It is only the LOCAL REALISTIC theories mimicking QM that create the paradox. Those MUST have individual values independent of the act of observation. Else they are not, by definition, local realistic.
 
  • #8
Greg-ulate said:
I meant 120° from each other, not both at +120°. Sorry for that confusion.

Doesn't matter, as nothing that happens downstream of the first polarizer in the chain can tell you anything about the state of the photon before it encountered that first polarizer. If incorporating these additional measurements is affecting the predicted correlations at Alice's and Bob's first polarizer, something is wrong somewhere.

Also, a question: in the "consider each of Alice's settings separately" tables, why is there no column for the case in which Bob and Alice have chosen the same setting? That will affect the fraction-same values which you carry down to the final table, the one that produces the .25 non-Bell result.
 
  • #9
DrChinese said:
http://people.whitman.edu/~beckmk/QM/grangier/Thorn_ajp.pdf
...

Attempt to construct a dataset of 10 or 20 triplets at your angles. This could be a stream of photons of any type. You will see that they cannot maintain the cos²(θ) relationship between the various angle pairs.

Thanks for this, but what does 10 or 20 triplets mean in this context?

I can't concentrate anymore so I'll have to look at that paper later...

DrChinese said:
True enough. It is only the LOCAL REALISTIC theories mimicking QM that create the paradox. Those MUST have individual values independent of the act of observation. Else they are not, by definition, local realistic.

Individual values independent of the act of observation, meaning that my electron really has a value for its x and y spin simultaneously, regardless of whether I observe it? What if I say it's got 1/√2 x and 1/√2 y? Then it registers as +x 7 times out of 10, and +y 7 times out of 10...
 
  • #10
Nugatory said:
Doesn't matter, as nothing that happens downstream of the first polarizer in the chain can tell you anything about the state of the photon before it encountered that first polarizer. If incorporating these additional measurements is affecting the predicted correlations at Alice's and Bob's first polarizer, something is wrong somewhere.

Also, a question: in the "consider each of Alice's settings separately" tables, why is there no column for the case in which Bob and Alice have chosen the same setting? That will affect the fraction-same values which you carry down to the final table, the one that produces the .25 non-Bell result.

This is where I have a problem. The average value of the polarization of the "photon" after it encounters the first polarizer can't tell you anything about what happened upstream of it, but that's the average, not the instantaneous value. Angular momentum and everything else is conserved, so if a "photon" makes it through the polarizer then it carries along with it all of the energy, momentum, and angular momentum that it had before; it's all or nothing. But the quantum state predicts only the expectation value, which is an average. The polarizer makes a cut on the statistical distribution of photons in such a way that all of the transmitted ones average out to be polarized in the direction of the filter. But if that were the case, you would find a non-unity chance to go through a second filter of the same polarization... unless you consider each "photon" to be a statistical ensemble by itself (which I do). So how can I make a statement about the instantaneous value? Maybe this is my problem.

The basis of the EPR experiment is that each photon knows a priori which polarizers it will go through; I'm just giving the photons weights appropriate to the observed transmission probabilities. Does this violate counterfactual definiteness?

How is the .25 non-Bell result calculated? Do they get .25 even though 1/3 of the time the analyzer settings are identical?

Thanks for your reply
 
  • #11
Greg-ulate said:
Thanks for this, but what does 10 or 20 triplets mean in this context?

Imagine 10 independent photons.

-120 0 +120
+ - +
- + +
etc.

The issue is to imagine that photons have these properties and have firm hidden variables independent of the act of observation. If they do, what are the outcomes they produce? I don't care what values you give them as long as they are specific. Saying "25% chance of X" doesn't cut it. Make 25% of them match instead. It is the internal consistency that becomes the problem.

If you don't ever try to do this exercise, you just go around in circles. (By yourself, because otherwise you won't be taken seriously.)
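For example, here is a rough sketch with a made-up set of triplets; swap in whatever values you like, the bookkeeping is the point. Each triplet matches on at least one of the three setting pairs, so the three pairwise match rates always average to at least 1/3 and can never all sit at the 25% that cos²(120°) predicts and experiments show.

Python:
from itertools import combinations

# Ten made-up photons, each with definite +/- outcomes at (-120, 0, +120).
dataset = [
    ('+', '-', '+'),
    ('-', '+', '+'),
    ('+', '+', '-'),
    ('-', '-', '+'),
    ('+', '-', '-'),
    ('-', '+', '-'),
    ('+', '+', '+'),
    ('-', '-', '-'),
    ('+', '-', '+'),
    ('-', '+', '+'),
]

angles = (-120, 0, 120)
rates = {}
for i, j in combinations(range(3), 2):          # the three unequal-setting pairs
    matches = sum(row[i] == row[j] for row in dataset)
    rates[(angles[i], angles[j])] = matches / len(dataset)

print(rates)                    # per-pair match rates for this dataset
print(sum(rates.values()) / 3)  # never drops below 1/3, whatever rows you choose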
 

What is the Bell-EPR paradox?

The Bell-EPR paradox is a thought experiment in quantum mechanics that examines the implications of entanglement, a phenomenon where two particles become connected in such a way that the state of one particle affects the state of the other, even at great distances.

What is the significance of the Bell-EPR paradox in quantum mechanics?

The Bell-EPR paradox challenges the traditional understanding of quantum mechanics and raises questions about the nature of reality and the role of observation in determining the state of particles. It has also led to important developments in quantum information theory and quantum computing.

Is the Bell-EPR paradox a real paradox or is there a resolution?

There is still ongoing debate and research on whether the Bell-EPR paradox can be resolved. Some scientists argue that the paradox can be explained by hidden variables or alternative interpretations of quantum mechanics. Others believe that it is a true paradox that highlights the limitations of our current understanding of the universe.

What experiments have been conducted to test the Bell-EPR paradox?

Several experiments have been conducted to test the Bell-EPR paradox, most notably Alain Aspect's experiments in 1982, which tested the CHSH inequality proposed in 1969. These experiments have consistently shown violations of Bell's inequality, ruling out local hidden-variable theories and supporting the quantum-mechanical description of entanglement.

What implications does the Bell-EPR paradox have on our understanding of the universe?

The Bell-EPR paradox has far-reaching implications for our understanding of the fundamental nature of the universe. It challenges our classical notions of causality and locality and raises questions about the role of observation and the true nature of reality at a quantum level.
