Bell's theorem: actual experiment event by event

In summary, the conversation revolves around the concrete setup of a Bell-test experiment and its results. Several questions and uncertainties are raised, including how the polarization angles are randomized, whether two independent random number generators can be used, and what happens to photons that do not pass through the polarizing beam splitter. The experiment itself measures the correlation between pairs of photons that pass through polarizers set at specific angles before being detected. The conversation also touches on the violation of Bell's inequality and the implications of this experiment for strict Einstein locality.
  • #1
humbleteleskop
So many different experiments, so many variations, so many versions of interpretations and story-telling abstractions. Rather than trying to understand what measurements mean, I would like to know how they are performed. Rather than theoretical, I'm interested in the practical aspects of an actual experimental setup.

I don't know what experiment to talk about. I hope someone can point to some of the simpler ones for which there are actual measurement data and details of the equipment. I'll start with this one, described in post #18:

https://www.physicsforums.com/showthread.php?t=454275


Initial setup:
angle choice constants: -30, 0, +30
diagram of the whole setup?
number of measurements to perform?
expected result and margin of error?
properties of polarizers P1 and P2?
properties of RNG1 and RNG2?
anything else relevant at this point?


Event T0:
photons L1 and L2 emitted
properties unknown?
properties uncertain?
properties certain?
anything else relevant at this point?

Event T1:
random number generator, one or two independent?
P1= RNG1(-30,0,+30)
P2= RNG2(-30,0,+30)
anything else relevant at this point?

Event T2:
L1 <interaction> P1
L2 <interaction> P2
interaction unknown?
interaction uncertain?
interaction certain?
anything else relevant at this point?

Event T3:
measurement: what where, how, why?
anything else relevant at this point?



So the goal is to fill up this table of events with all the relevant information that can be said about the setup at each event point in time, or insert some new event I overlooked perhaps. At the end I expect to have clear insight of all the variables and constants describing the setup and influencing the measurements, so that I would be able to define them in a computer program in an algorithm that would try to mimic the experiment as a simulation.
 
  • #2
So, I did it myself to the best of my knowledge to demonstrate what the end result should look like.


Initial setup:
A= -30, B= 0, C= +30
N_MEASURE= 0
N_REPEAT= 10,000
CORRELATED= 0


Event T0:
L1= 1
L2= 1

Event T1:
P1= RNG1(A, B, C)
P2= RNG2(A, B, C)

Event T2:
L1= L1 x P1
L2= L2 x P2

Event T3:
if L2 == L1 then CORRELATED++
N_MEASURE++
RESULT= CORRELATED/(N_MEASURE/100)
if N_MEASURE < N_REPEAT goto Event T0


Is there any property I am not accounting for? Anything that needs to be added or changed? What is "wrong" and what is "correct" result, with what margin of error?
 
  • #3
Would love to help, but not exactly sure what you expect from each event. You can measure any 2 angles and there will be 4 possible outcome permutations: ++ +- -+ --. The 2 independent RNG idea does not work, because that will yield classical probability. You need something where the 2 outcomes are coordinated since you are modeling entanglement.
 
  • #4
humbleteleskop said:
So, I did it myself to the best of my knowledge to demonstrate what the end result should look like.


Initial setup:
A= -30, B= 0, C= +30
N_MEASURE= 0
N_REPEAT= 10,000
CORRELATED= 0


Event T0:
L1= 1
L2= 1

Event T1:
P1= RNG1(A, B, C)
P2= RNG2(A, B, C)

Event T2:
L1= L1 x P1
L2= L2 x P2

Event T3:
if L2 == L1 then CORRELATED++
N_MEASURE++
RESULT= CORRELATED/(N_MEASURE/100)
if N_MEASURE < N_REPEAT goto Event T0


Is there any property I am not accounting for? Anything that needs to be added or changed? What is "wrong" and what is "correct" result, with what margin of error?

My prior post was directed at your prior post.

For this, I do not follow your notation at all. It further seems to be modeling different setups, but that is really not clear.
 
  • #5
DrChinese said:
Would love to help, but not exactly sure what you expect from each event.

I want to describe the whole experiment and each event in the simplest terms possible. I want to discern which properties are variable and which are constant, which are certain and which are not, and to what degree.


You can measure any 2 angles and there will be 4 possible outcome permutations: ++ +- -+ --. The 2 independent RNG idea does not work, because that will yield classical probabilty. You need something where the 2 outcomes are coordinated since you are modeling entanglement.

Why do two independent RNGs not work? That would imply some property of the RNG itself influences the result. Two independent RNGs should produce just as random an output as a single one, shouldn't they?

How is the polarization angle randomized in an actual setup? Electronically, or do the polarizers physically rotate? What drives them?

What about those photons that do not pass through? Even if the initial polarization of both photons is the same relative to their polarizers, it still doesn't mean they will both actually pass through. Right?
 
  • #6
humbleteleskop said:
I want to describe the whole experiment and each event in the simplest terms possible. I want to discern which properties are variable and which are constant, which are certain and which are not, and to what degree.

Why do two independent RNGs not work? That would imply some property of the RNG itself influences the result. Two independent RNGs should produce just as random an output as a single one, shouldn't they?

How is the polarization angle randomized in an actual setup? Electronically, or do the polarizers physically rotate? What drives them?

What about those photons that do not pass through? Even if the initial polarization of both photons is the same relative to their polarizers, it still doesn't mean they will both actually pass through. Right?

Your model is not really appropriate so that is a big issue. There is only one state even though there are 2 photons. Further, there are no "certain" properties of the individual photons outside of the context of a measurement. That point is highlighted when you consider event by event analysis.

There are a variety of ways to change the settings of polarizers midflight. I will give you a reference to study. Please be aware that there is no need to do that unless you are also testing to rule out unknown classical communication channels between the polarizers as to their orientation. There is no theoretical mechanism for that, and none has ever been observed.

Not all photons are detected at the polarizing beam splitter. Since there is coincidence counting via time window, that is resolved.

http://arxiv.org/abs/quant-ph/9810080

Violation of Bell's inequality under strict Einstein locality conditions
Gregor Weihs, Thomas Jennewein, Christoph Simon, Harald Weinfurter, Anton Zeilinger (University of Innsbruck, Austria)
(Submitted on 26 Oct 1998)

We observe strong violation of Bell's inequality in an Einstein, Podolsky and Rosen type experiment with independent observers. Our experiment definitely implements the ideas behind the well known work by Aspect et al. We for the first time fully enforce the condition of locality, a central assumption in the derivation of Bell's theorem. The necessary space-like separation of the observations is achieved by sufficient physical distance between the measurement stations, by ultra-fast and random setting of the analyzers, and by completely independent data registration.
 
  • #7
DrChinese said:
For this, I do not follow at all your notation. It further seems to be modeling different setups but that is really not clear.

It's like English, but for computers. Every line is a sentence, a statement, an equation. It's plain mathematics. L1 and L2 are "light" particles; they currently have only one property, but they can have more, and any other element of the setup can have as many different properties as necessary. P1 and P2 represent polarizer properties. RNG represents a source of randomness; for each measurement it picks one of the three angles defined by constants A, B, C. CORRELATED is a variable that keeps track of how many pairs were found to be correlated at Event T3. N_MEASURE is a variable that keeps track of how many measurements have been performed up to that point in time. N_REPEAT defines how many measurements we want to perform.



Event T0:
L1= 1
L2= 1

This reads, two photons are emitted with correlated properties. Since we haven't defined anything else, it also means we assume this holds for every single photon pair, 100% guaranteed. If there are any uncertainties or if the description is not complete, I'd like you to tell me about it.


Event T1:
P1= RNG1(A, B, C)
P2= RNG2(A, B, C)

This reads, polarizers P1 and P2 are rotated to either A, B, or C angle depending on RNG choice between the three.


Event T2:
L1= L1 x P1
L2= L2 x P2

This reads, photon L1 interacts with polarizer P1, and photon L2 interacts with polarizer P2. I used multiplication as the 'interaction function', but it can be expressed in many other ways, including uncertainty percentages.


Event T3:
if L2 == L1 then CORRELATED++
N_MEASURE++
RESULT= CORRELATED/(N_MEASURE/100)
if N_MEASURE < N_REPEAT goto Event T0

This reads, sensors compare photons L1 and L2, and if they are "correlated" increase the counter by one. Then, increase the counter that keeps the number of measurements so we can calculate the percentage of correlations relative to the number of measurements. Then, if not all the measurements are performed yet go to Event T0 and repeat the same thing for the next two photons.
 
  • #8
humbleteleskop said:
It's like English, but for computers.

OK, I am a professional programmer. I have written models for simulating EPR experiments with many iterations, so I certainly know how to do that. But your "pseudo-code" does not really map to anything I follow. Although I now understand what your RNG1 and RNG2 do (and that makes sense).

Usually, one attempts to do 1 of 2 things with such a program: (a) simulate results of a Bell test using a local model; (b) generate results that are more or less guaranteed to match Bell test results according to theory.

The first (a) is quite complicated to achieve. I can give you a reference to someone who has done this, or provide you with code that does. Please note that the results are in the eye of the beholder here, as such a model is actually not in accordance with quantum theory.

The second (b) is easier, but does not prove anything useful.

Can you be more clear as to what you hope to achieve?
 
  • #9
To be more clear: no one knows what happens at an elementary level in the first place. All anyone knows is how to calculate probabilities of some specified outcomes given a particular setup. So I hope you see the difficulty in your questions. There is no known answer, though in some cases it is possible to model via computer program.

Take your notation:

Event T2:
L1= L1 x P1
L2= L2 x P2

The problem with this is that it implies 2 independent operations. That will not produce something in accordance with an actual experiment. You need something more like:

Event T2:
L1= L1 x P1
L2= L1 x P2

or

Event T2:
L1= L2 x P1
L2= L2 x P2

So that the correlations hold.
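As one way to picture "coordinated" outcomes, here is a minimal C sketch (my own illustration, not code from any actual experiment or from this thread): the first photon's result is drawn at random, and the second is made to agree with it with probability cos^2(theta), the Type I quantum match rate quoted later in the thread. The angle set and variable names are purely illustrative.

Code:
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Illustrative sketch of coordinated outcomes: the second result depends
   on the first result and on the relative angle theta, rather than on
   two independent operations. */
int main(void)
{
    const double PI = 3.14159265358979323846;
    const double angles[3] = { -30.0, 0.0, 30.0 };
    int matches = 0, trials = 100000;

    srand((unsigned)time(NULL));
    for (int n = 0; n < trials; n++) {
        double p1 = angles[rand() % 3];          /* setting at station 1 */
        double p2 = angles[rand() % 3];          /* setting at station 2 */
        double theta = (p1 - p2) * PI / 180.0;   /* relative angle */

        int r1 = rand() % 2;                     /* first outcome: random +/- */
        double agree = cos(theta) * cos(theta);  /* quantum match probability */
        int r2 = ((double)rand() / RAND_MAX < agree) ? r1 : 1 - r1;

        if (r1 == r2) matches++;
    }
    printf("overall match rate: %.1f%%\n", 100.0 * matches / trials);
    return 0;
}

Run as is, this prints something near 72%; keeping only the trials where the two settings differ by 60 degrees would give about 25%, the figure that comes up later in the thread.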
 
  • #10
DrChinese said:
Not all photons are detected at the polarizing beam splitter. Since there is coincidence counting via time window, that is resolved.

I guess then, only a single pair of photons is created for each measurement with some measurable time delay until the next pair is created?


We observe strong violation of Bell's inequality in an Einstein, Podolsky and Rosen type experiment with independent observers. Our experiment definitely implements the ideas behind the well known work by Aspect et al. We for the first time fully enforce the condition of locality, a central assumption in the derivation of Bell's theorem. The necessary space-like separation of the observations is achieved by sufficient physical distance between the measurement stations, by ultra-fast and random setting of the analyzers, and by completely independent data registration.

Could you also give me the numbers?

Wrong result= ?% - ?%
Correct result= ?% - ?%


25% vs 33%, sometimes it's about 75%, you know what I mean? What result do you expect the code will produce?
 
  • #11
DrChinese said:
Can you be more clear as to what you hope to achieve?

Just to see what happens. Everyone complicates, I want to simplify.
 
  • #12
humbleteleskop said:
Just to see what happens. Everyone complicates, I want to simplify.

Nice try. It is already as simple as it can be.
 
  • #13
humbleteleskop said:
I guess then, only a single pair of photons is created for each measurement with some measurable time delay until the next pair is created?

That's the model. A pair of photons is created occasionally out of millions of photons coming into the PDC crystal. The closer in time the two members of a pair arrive relative to each other, the more likely they are to be an entangled pair. Usually the next pair arrives well after the first (perhaps a microsecond later). So it is reasonably easy to match them into pairs by looking at timestamps for the detectors.
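For what it's worth, the timestamp-matching step described above can be sketched as a simple scan over two sorted lists of detection times. This is only an illustration of the idea; the 6 ns window and the toy numbers are invented, not taken from any actual experiment.

Code:
#include <stdio.h>

/* Illustrative coincidence counting: two sorted lists of detector
   timestamps (nanoseconds) are paired when they fall within a
   coincidence window. The window value is made up. */
#define WINDOW_NS 6.0

int count_coincidences(const double *t1, int n1, const double *t2, int n2)
{
    int i = 0, j = 0, pairs = 0;
    while (i < n1 && j < n2) {
        double dt = t1[i] - t2[j];
        if (dt > WINDOW_NS)       j++;   /* station-2 event has no partner yet */
        else if (dt < -WINDOW_NS) i++;   /* station-1 event has no partner yet */
        else { pairs++; i++; j++; }      /* within the window: count as a pair */
    }
    return pairs;
}

int main(void)
{
    /* toy data: the third event at station 1 has no partner at station 2 */
    double t_station1[] = { 100.0, 1100.0, 2050.0, 3100.0 };
    double t_station2[] = { 102.0, 1103.0, 3097.0 };
    printf("coincidences: %d\n",
           count_coincidences(t_station1, 4, t_station2, 3));
    return 0;
}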
 
  • #14
DrChinese said:
Event T2:
L1= L1 x P1
L2= L1 x P2

That reads, photon L1 interacts with both polarizers P1 and P2, where photon L2 gets affected even if it was not involved in the interaction itself.

True or not, it's an arbitrary assumption from the practical perspective of the setup. I can not say photon L1 interacts with polarizer P2 since photon L1 is named "L1" exactly because it is the one supposed to interact with polarizer P1 and not P2.

I'm trying to describe it in objective practical terms, not theoretical ones. Whether some such interaction and correlation over distance exist is the question the experiment is supposed to answer, not something the setup should be designed to produce.
 
  • #15
humbleteleskop said:
That reads, photon L1 interacts with both polarizers P1 and P2, where photon L2 gets affected even if it was not involved in the interaction itself.

True or not, it's an arbitrary assumption from the practical perspective of the setup. I can not say photon L1 interacts with polarizer P2 since photon L1 is named "L1" exactly because it is the one supposed to interact with polarizer P1 and not P2.

I'm trying to describe it in objective practical terms, not theoretical ones. Whether some such interaction and correlation over distance exist is the question the experiment is supposed to answer, not something the setup should be designed to produce.

If you want to make up a model and see if it flies, that is fine. My point is that you can model it more like QM, or something that does not match the actual observed results (as you seem to be leaning).

Experimental results clearly support a model of one system containing 2 photons, not 2 separate systems of 1 photon each. A model of 2 independent photons will not be self-consistent (because of Bell). So there can be no L1 and L2 photons while they are in an entangled state, because they are indistinguishable. Once they are given a separate identity (as L1 or L2), they cease to be entangled. And it is arbitrary to say whether the observation of what becomes L1 affects L2, or vice versa. Time ordering makes no difference in this case.

The theoretical description is in fact the practical one, as well as an objective one. I know that because the experiment has already been run. :) In this area, theory leads practice.
 
  • #16
To be more specific: The EPR model held that the results of observations on entangled particles must be predetermined (since there are perfect correlations when the angle is the same). Do you envision that for your model? Because this is what Bell dismantled. (At least, when locality is assumed.)

And further: it is possible to entangle photons that have never existed in the same region of space time (ie have never interacted in the past). In fact, there is no requirement that they ever existed at the same time. So when you consider a model, you can ignore these things - but at the expense of getting a wrong prediction.
 
  • #17
DrChinese said:
If you want to make up a model and see if it flies, that is fine. My point is that you can model it more like QM, or something that does not match the actual observed results (as you seem to be leaning).

I'm trying to describe the experiment in the simplest terms possible, so as to understand how it is performed. That this "description" will be able to run on a computer is just a side effect, since computers already speak the language I want to use.

What result do you think I will get? What result does QM get? 25%, 33%, 75%... can you explain a bit where these percentages come from and what they represent or describe?


Experimental results clearly support a model of one system containing 2 photons, not 2 separate systems of 1 photon each. A model of 2 independent photons will not be self-consistent (because of Bell). So there can be no L1 and L2 photons while they are in an entangled state, because they are indistinguishable. Once they are given a separate identity (as L1 or L2), they cease to be entangled. And it is arbitrary to say whether the observation of what becomes L1 affects L2, or vice versa. Time ordering makes no difference in this case.

The theoretical description is in fact the practical one, as well as an objective one. I know that because the experiment has already been run. :) In this area, theory leads practice.

When you describe this experiment in your own words, do you really say photon L1 goes through the other polarizer P2, where photon L2 was supposed to go, or do you describe what was intended and expected by the setup according to classical physics?

I think the experiment should be described in terms of classical physics, in terms of what is "normally" intended and expected. Only when the results come out and we start to analyze them, only then would I go into some theory to explain the contradiction between "expected" and "obtained" results.
 
  • #18
humbleteleskop said:
I'm trying to describe the experiment in the simplest terms possible, so as to understand how it is performed. That this "description" will be able to run on a computer is just a side effect, since computers already speak the language I want to use.

Typically: As you know, the P1 and P2 polarizing beamsplitters can be set to various relative rotational angles. Each of the 2 outputs of each PBS has a photodetector present, which records either a +1 or -1 (or similar) depending on which detector fires. Usually just summarized as + or -. If the settings for P1 and P2 are set 60 or 120 degrees apart (doesn't matter which) and you feed entangled pairs of photons to P1/P2, you can match up the + and - stream that results. Obviously there is some noise and some situations where one of the pair is not detected too. There are also a few pairs that may arrive that have ceased to be entangled.

The results will be something close to a 25% match rate for Type I PDC, and something close to 75% for Type II PDC. I find it convenient to discuss Type I PDC as the math is a bit simpler. The match rate (theoretical) is cos^2(theta) where theta is the angle between P1 and P2, ie P1-P2 or P2-P1. Another result is that you get 100% match when the angle is the same for any P1 and P2, ie P1=P2 and theta is 0. And lastly, the result you see looking at the P1 (or P2) stream individually should be random or at least appear so to the eye.

So you will quickly see that a computer algorithm to achieve this description is (nearly*) impossible without having the result at one detector depend on the setting of the other (either directly or indirectly). In order to get the so-called perfect correlations, you need to give your hypothetical L1 and L2 some properties that yield + or - values at any specific angle. But you will have difficulty making the statistics work out as you rotate P1 and P2 through 360 degrees, holding the difference theta constant.

*The "nearly" part leads to a very complex and contentious discussion that is outside the scope of this thread.
 
  • #19
humbleteleskop said:
I think the experiment should be described in terms of classical physics, in terms of what is "normally" intended and expected. Only when the results come out and we start to analyze them, only then would I go into some theory to explain the contradiction between "expected" and "obtained" results.

Each person is free to take their own path. But don't expect your path to have any particular priority over others. Bell already knew what to expect in the way of statistics even before an experiment was run because he knew the basic science of QM. He started with some very simple "boolean/classical" ideas (but not a model) and tried to reconcile with those quantum predictions. He never needed to explain the difference between expected and obtained results because there weren't any. He just rejected the classical assumption, as does most anyone studying the area.
 
  • #20
DrChinese said:
Typically: As you know, the P1 and P2 polarizing beamsplitters can be set to various relative rotational angles.

I didn't think P1 and P2 polarizers are beam-splitters. I thought they were two pieces of transparent plastic with a certain polarization angle, and that they can rotate around an axis parallel to the trajectories of the incoming photons. I thought the beam-splitter is the source of photon pairs.


Each of the 2 outputs of each PBS has a photodetector present, which records either a +1 or -1 (or similar) depending on which detector fires. Usually just summarized as + or -.

Detectors record +1 or -1, depending on what?


If the settings for P1 and P2 are set 60 or 120 degrees apart

What happens if angles are 70, 110, 30, and 45 degrees apart?


(doesn't matter which) and you feed entangled pairs of photons to P1/P2, you can match up the + and - stream that results.

You mean in that case for every photon pair both detectors will either record +1, or both record -1? If both are +1, or both are -1, then the CORRELATED counter increases by one? If they are not the same, the CORRELATED counter stays unchanged?

Aren't the readings supposed to be correlated (two of the same) when the angles are the same?


The results will be something close to a 25% match rate for Type I PDC

That's puzzling to me. Photon L1 is always correlated with L2 when P1 and P2 polarizer angles are the same? So, if RNG guarantees 1/3 of the time the angles will be the same, how can you possibly get less than 33%?

Do any of those experiments independently record, directly from the RNG, what angles were actually chosen for each measurement, to ensure the RNG was indeed fair and converged to 33.33% by dealing a proportional number of all three possible angles?


So you will quickly see that a computer algorithm to achieve this description is (nearly*) impossible without having the result at one detector depend on the setting of the other (either directly or indirectly). In order to get the so-called perfect correlations, you need to give your hypothetical L1 and L2 some properties that yield + or - values at any specific angle. But you will have difficulty making the statistics work out as you rotate P1 and P2 through 360 degrees, holding the difference theta constant.

So the first step is to achieve results described by deterministic/classical predictions. Then I can play with it and try to simulate different sources of error or uncertainty to see how and how much results can be impacted by such factors.

Is it not concerning that such important conclusions depend on a "correct" result which is so close to the "wrong" result, 25% vs 33%? Aren't they a little bit too close for comfort, especially since we involved random factors in the game? How close do QM experiments get to 25%? What if I get 29%, how spooky is that, a little bit spooky, or not spooky at all?
 
  • #21
humbleteleskop said:
Is it not concerning that such important conclusions depend on a "correct" result which is so close to the "wrong" result, 25% vs 33%? Aren't they a little bit too close for comfort, especially since we involved random factors in the game?

They aren't that close. It's the difference between being surrounded by a bunch of people who average six feet tall and a bunch of people who average eight feet tall - you will be able to determine with some confidence which it is if you have a large number of samples.
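A rough back-of-envelope version of that point, assuming simple binomial statistics and the trial counts used elsewhere in this thread:

Code:
#include <math.h>
#include <stdio.h>

/* With N trials, the statistical spread on a match rate p is roughly
   sqrt(p*(1-p)/N), so 25% and 33% separate by many standard deviations
   once N is in the thousands. */
int main(void)
{
    const double p_qm = 0.25, p_lr = 1.0 / 3.0;
    const int Ns[] = { 100, 1000, 10000 };

    for (int k = 0; k < 3; k++) {
        double n = (double)Ns[k];
        double sigma = sqrt(p_qm * (1.0 - p_qm) / n);
        printf("N = %5d: sigma ~ %.2f%%, separation ~ %.1f sigma\n",
               Ns[k], 100.0 * sigma, (p_lr - p_qm) / sigma);
    }
    return 0;
}

At 10,000 trials the spread is roughly half a percentage point, so the two predictions sit about 19 standard deviations apart.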
 
  • #22
Actual C/C++ algorithm

Code:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define A -30
#define B   0
#define C +30

/* Pick one of the three angle constants at random. */
int RN_GEN(int a1, int a2, int a3)
{
    int i = rand() % 3;
    if (i == 0) return a1;
    if (i == 1) return a2;
    return a3;
}

int main(void)
{
Init_Setup:;
    srand((unsigned)time(NULL));
    int N_MEASURE = 0;
    int N_REPEAT = 1000;
    int CORRELATED = 0;
    int P1 = 0;
    int P2 = 0;

BEGIN:;
    //Event_T0: photon pair emitted with identical properties
    int L1 = 1;
    int L2 = 1;

    //Event_T1: polarizer angles chosen at random
    P1 = RN_GEN(A, B, C);
    P2 = RN_GEN(A, B, C);

    //Event_T2: photon-polarizer interaction (multiplication as placeholder)
    L1 *= P1;
    L2 *= P2;

    //Event_T3: compare detector outcomes
    printf("%d: ", N_MEASURE);
    if (L2 == L1)
    {
        CORRELATED++;
        printf("SAME\n");
    }
    else
        printf("-xx-\n");

    N_MEASURE++;
    if (N_MEASURE < N_REPEAT) goto BEGIN;

    printf("\nRESULT: %d%%", CORRELATED / (N_MEASURE / 100));
    printf("\n\nPress Enter to repeat.\n");
    getchar();
    goto Init_Setup;
}


Results:
100 measurements: 24% - 42%
1000 measurements: 30% - 35%
10000 measurements: 32% - 33%
 
  • #23
humbleteleskop said:
1. I didn't think P1 and P2 polarizers are beam-splitters. I thought they were two pieces of transparent plastic with a certain polarization angle, and that they can rotate around an axis parallel to the trajectories of the incoming photons. I thought the beam-splitter is the source of photon pairs.

2. What happens if angles are 70, 110, 30, and 45 degrees apart?

3. That's puzzling to me. Photon L1 is always correlated with L2 when P1 and P2 polarizer angles are the same? So, if RNG guarantees 1/3 of the time the angles will be the same, how can you possibly get less than 33%?

4. Do any of those experiments independently record, directly from the RNG, what angles were actually chosen for each measurement, to ensure the RNG was indeed fair and converged to 33.33% by dealing a proportional number of all three possible angles?

5. So the first step is to achieve results described by deterministic/classical predictions. Then I can play with it and try to simulate different sources of error or uncertainty to see how and how much results can be impacted by such factors.

6. Is it not concerning that such important conclusions depend on a "correct" result which is so close to the "wrong" result, 25% vs 33%? Aren't they a little bit too close for comfort, especially since we involved random factors in the game? How close do QM experiments get to 25%? What if I get 29%, how spooky is that, a little bit spooky, or not spooky at all?

Whoa, partner! You claimed you wanted simple, so I have tried to make it as simple as possible. Here are a few comments:

1. Usually polarizing beam splitters are used in the more sophisticated setups; however, polarizing filters will work too. Having both modes (from a PBS) gives higher efficiency/tighter results.

2. The formula is as I provided, cos^2(theta) for PDC I. That tells you what to expect. The 33% varies with the model, but no local realistic model goes below that.

3. You ask how? There was no big surprise to this, as it was predicted by the quantum model. Nature does not follow classical rules. At any rate, the results are different from your model's.

4. When angles are being changed on the fly, the angles are recorded. Not all Bell experiments change the angles on the fly. There are hundreds of different setups depending on what is being probed.

As a practical matter: most Bell tests do not use the 25% vs 33% standard for ruling out classical theories. Instead they use what is called the CHSH inequality. You can read about it in the references. It has an upper bound of 2.00 for classical (local realistic) theories, which predict something at or below 2.00. That maps to the 33%. The quantum theoretical max is about 2.82 (a sketch evaluating that value is at the end of this post). Bell tests usually give a value between 2.25 and 2.40.

Further, CHSH uses 4 angle combinations. You can read about that too.

5. I see you have done just that with your code snippet. Will comment on that separately.

6. As Nugatory has commented, that is plenty enough to distinguish. In fact, some Bell tests have demonstrated violations of local realism by over 100 standard deviations. The Weihs test was about 30 SD IIRC.

I strongly suggest you read the Weihs et al reference, or the following which is intended for college level physics students:

http://arxiv.org/abs/quant-ph/0205171

"We use polarization-entangled photon pairs to demonstrate quantum nonlocality in an experiment suitable for advanced undergraduates. The photons are produced by spontaneous parametric down conversion using a violet diode laser and two nonlinear crystals. The polarization state of the photons is tunable. Using an entangled state analogous to that described in the Einstein-Podolsky-Rosen ``paradox,'' we demonstrate strong polarization correlations of the entanged photons. Bell's idea of a hidden variable theory is presented by way of an example and compared to the quantum prediction. A test of the Clauser, Horne, Shimony and Holt version of the Bell inequality finds S=2.307±0.035, in clear contradiciton of hidden variable theories. The experiments described can be performed in an afternoon."
 
  • #24
humbleteleskop said:
Results:
100 measurements: 24% - 42%
1000 measurements: 30% - 35%
10000 measurements: 32% - 33%

Looks about right. Well done.
 
  • #25
DrChinese said:
I find it convenient to discuss Type I PDC as the math is a bit simpler. The match rate (theoretical) is cos^2(theta) where theta is the angle between P1 and P2, ie P1-P2 or P2-P1. Another result is that you get 100% match when the angle is the same for any P1 and P2, ie P1=P2 and theta is 0.

Isn't that just Malus's law? About the number of photons, where some pass through, and some get stopped at the polarizer, depending on their relative polarization angle. How does that tell us anything about photon pairs correlation?


And lastly, the result you see looking at the P1 (or P2) stream individually should be random or at least appear so to the eye.

A: 1 0 0 1 0
B: 1 0 0 1 0
Correlated: 100% ?


A: 1 0 0 1 0
B: 0 1 1 0 1
Correlated: 0% ?

If the A and B sequences are exactly opposite, aren't the readings actually still 100% correlated?
 
  • #26
humbleteleskop said:
1. Isn't that just Malus's law? About the number of photons, where some pass through, and some get stopped at the polarizer, depending on their relative polarization angle. How does that tell us anything about photon pairs correlation?




A: 1 0 0 1 0
B: 1 0 0 1 0
Correlated: 100% ?


A: 1 0 0 1 0
B: 0 1 1 0 1
Correlated: 0% ?

2. If the A and B sequences are exactly opposite, aren't the readings actually still 100% correlated?

1. It does look to be the same, but that is something of a coincidence. When you run through the quantum formalism, that is the result for entangled photons. So it does describe the entangled state. See formulae 1, 2, 3 in the Dehlinger reference which makes this clear.

2. Type I go one way, type II the other. You of course adjust for that.
 
  • #27
DrChinese said:
3. You ask how? There was no big surprise to this, as it was predicted by the quantum model. Nature does not follow classical rules. At any rate, the results are different from your model's.

What I don't respond to I consider answered, thank you. If we are not talking about the same thing I hope you will see through that because I don't think I can.


* P1 and P2 set to same angle -> L1 and L2 ALWAYS correlated

* RNG guarantees 33% same angle

This is a promise that *at least* 33% of the time there will be correlation. The above are some of the very few constants we have in this experiment, something we are supposed to trust and rely on, a basic calibration setup. If we don't get *at least* 33%, then we cannot make those starting claims in the first place.


Also, what sense does it make to prove "correlation" based on getting LESS correlated particles, instead of getting more correlated particles out of the experiment?


Each of the 2 outputs of each PBS has a photodetector present, which records either a +1 or -1 (or similar) depending on which detector fires. Usually just summarized as + or -.

How does a detector know whether it is +1 or -1? What does it measure?
 
  • #28
humbleteleskop said:
* P1 and P2 set to same angle -> L1 and L2 ALWAYS correlated

* RNG guarantees 33% same angle

This is a promise that *at least* 33% of the time there will be correlation. The above are some of the very few constants we have in this experiment, something we are supposed to trust and rely on, a basic calibration setup. If we don't get *at least* 33%, then we cannot make those starting claims in the first place.

Yikes, you arrived at 33% a COMPLETELY different way than I did. Sorry, I didn't look at the detail of your code. I only count trials in which theta is 60 or 120 degrees different. The QM prediction for that is 25%.

No local realistic model can give a result lower than 33% when the angles are different (per above) if it is to be consistent (rotationally) and give 100% correlation when the angle is the same. That result I assumed you knew from other reading. My bad. Have you looked at any of my web pages discussing Bell? That may help. This one explains the 33% lower limit on local realistic theories.

http://drchinese.com/David/Bell_Theorem_Easy_Math.htm
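A compact way to see that lower bound, in the spirit of the page linked above: enumerate every predetermined answer set a local realistic pair could carry for the three angles, and check the match rate when the two stations pick different angles. The +/- labels are hypothetical; this is just the counting argument.

Code:
#include <stdio.h>

/* Enumerate the 8 possible predetermined outcome sets (one +/- value
   for each of the angles A, B, C) and print the match rate over the
   three unequal-angle pairs (A,B), (A,C), (B,C). */
int main(void)
{
    for (int s = 0; s < 8; s++) {
        int a = (s >> 0) & 1, b = (s >> 1) & 1, c = (s >> 2) & 1;
        int matches = (a == b) + (a == c) + (b == c);
        printf("A=%c B=%c C=%c : matches at unequal angles = %d/3\n",
               a ? '+' : '-', b ? '+' : '-', c ? '+' : '-', matches);
    }
    return 0;
}

Every line comes out 3/3 or 1/3, so no mixture of answer sets can average below 1/3, while the quantum prediction at 60 degrees apart is 1/4.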
 
  • #29
DrChinese said:
2. Type I go one way, type II the other. You of course adjust for that.

Data stream 1
A: 0 1 0 0 1 0 1 1 0 1
B: 0 1 0 0 1 0 1 1 0 1

Data stream 2
A: 0 1 0 0 1 0 1 1 0 1
B: 1 0 1 1 0 1 0 0 1 0


So if a Type I experiment counts only matching values, it concludes the first sequence is 100% correlated and the second sequence 0% correlated, thus failing to recognize that the second sequence is actually 100% correlated as well? It seems they should really be sampling and analyzing data over larger sequences, not just making one-to-one comparisons.
 
  • #30
I think I see a new law of physics, new to me at least.

Humble's 1st Law of Binary Sequences
- two binary sequences can never be more than 50% uncorrelated

EDIT: The moment where "matching pairs correlation" drops below 50% is the same moment where "opposite pairs correlation" rises above 50%.
 
  • #31
humbleteleskop said:
Data stream 1
A: 0 1 0 0 1 0 1 1 0 1
B: 0 1 0 0 1 0 1 1 0 1

Data stream 2
A: 0 1 0 0 1 0 1 1 0 1
B: 1 0 1 1 0 1 0 0 1 0


So if a Type I experiment counts only matching values, it concludes the first sequence is 100% correlated and the second sequence 0% correlated, thus failing to recognize that the second sequence is actually 100% correlated as well? It seems they should really be sampling and analyzing data over larger sequences, not just making one-to-one comparisons.

At this point you are so lost...

You can measure any values you like in an experiment of your own making. I really can't help you on that. The rest of us have standard setups to discuss and I have given you any number of those. Everyone knows the difference between matches and correlation. And anti-correlation too, which is what you are mentioning. None of this has anything to do with a Bell test beyond what I have explained so far.

The issue is the average relationship between pairs of entangled particles. In quantum theory, that relationship remains in place as long as the pair is described by a single wave function. In classical theory, that relationship ends when they are no longer in causal contact because classical theories follow local realism.

I will again ask you to read the references I have provided. When you understand those, many of your questions will be resolved. Your idea that the scientists working this area do not understand the basics of their experiments (and are comparing wrong sequences) is... well, what do you really expect?

Learn the background, THEN try to tear it apart. Not the other way around.
 
  • #32
humbleteleskop said:
I think I see a new law of physics, new to me at least.

Humble's 1st Law of Binary Sequences
- two binary sequences can never be more than 50% uncorrelated

EDIT: The moment where "matching pairs correlation" drops below 50% is the same moment where "opposite pairs correlation" rises above 50%.

When measuring and reporting "correlations", the usual standard is:

1 means fully correlated.
-1 means fully anti-correlated.
0 means neither, or essentially random.
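If it helps, that -1 to +1 number is normally estimated from coincidence counts as (same - different)/(same + different). The counts in this sketch are invented purely to show the three reference values.

Code:
#include <stdio.h>

/* Correlation estimate on the -1..+1 scale from coincidence counts of
   same-sign and opposite-sign outcome pairs. */
double correlation(long n_same, long n_diff)
{
    return (double)(n_same - n_diff) / (double)(n_same + n_diff);
}

int main(void)
{
    printf("all same:      E = %+.2f\n", correlation(1000, 0));
    printf("all opposite:  E = %+.2f\n", correlation(0, 1000));
    printf("50/50 random:  E = %+.2f\n", correlation(500, 500));
    return 0;
}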
 
  • #33
DrChinese said:
When measuring correlations:

1 means fully correlated.
-1 means fully anti-correlated.
0 means neither, or essentially random.

How does a detector know whether it is +1 or -1? What does it measure?

How does a detector know it's 0? What does it measure, or how is it calculated?


The issue is the average relationship between pairs of entangled particles. In quantum theory, that relationship remains in place as long as the pair is described by a single wave function. In classical theory, that relationship ends when they are no longer in causal contact because classical theories follow local realism.

Does a Type I experiment count and record both matching pairs and opposite pairs?

How is that average correlation calculated, do you know the formula?
 
  • #34
DrChinese said:
Yikes, you arrived at 33% a COMPLETELY different way than I did. Sorry, I didn't look at the detail of your code. I only count trials in which theta is 60 or 120 degrees different. The QM prediction for that is 25%.

If I don't count correlations when P1 and P2 angles are the same, then I get this:

100 measurements 18% - 27%
1000 measurements 20% - 24%
10000 measurements 21% - 22%


New algorithm, changed lines marked with an asterisk:

Code:
Event T0:
L1= 1
* L2= -1

Event T1:
P1= RNG1(A, B, C)
P2= RNG2(A, B, C)

Event T2:
L1= L1 x P1
L2= L2 x P2

Event T3:
* if P1 == P2 goto SKIP:
if L2 == L1 then CORRELATED++
* SKIP:
N_MEASURE++
RESULT= CORRELATED/(N_MEASURE/100)
if N_MEASURE < N_REPEAT goto Event T0
 
  • #35
humbleteleskop said:
How does a detector know whether it is +1 or -1? What does it measure?

How does a detector know it's 0? What does it measure, or how is it calculated?

Does a Type I experiment count and record both matching pairs and opposite pairs?

How is that average correlation calculated, do you know the formula?

Type I photons have the same polarization. Type II pairs have orthogonal (perpendicular) polarization. Type has no bearing on what you decide to record; that is part of the rest of the setup.

A Polarizing Beam Splitter (PBS) allows horizontally (H) polarized photons straight through, and deflects vertical (V) ones through some angle specific to the PBS. Keep in mind the H and V designations are relative. The PBS can itself be rotated. The important thing is that you can orient the PBS so you can split the input stream in such a way that photons emerge with a known polarization. You can label it as makes sense.

The PBS will have photodetectors at each of the 2 output channels. When one fires, the detector and time are recorded. Obviously, you can consider that an H, V, +, -, 0, 1, -1 or whatever as long as it is consistent. Different people tend to use different notation.

I usually talk about match % because it is easier to discuss and relate to the cos^2 formula. Then the formula is:

Matches/(Matches+NonMatches) where hits that cannot be paired are ignored. In real experiments, unpaired hits are usually noted and discussed.
 
