EPR Debate: Nature Agrees with Einstein

  • Thread starter: JohnBarchak
  • Tags: EPR

Summary:
The discussion centers on the EPR debate between Einstein and Bohr regarding the nature of quantum entanglement and measurement. Einstein argues that photons have definite polarizations from creation, while Bohr contends they exist in a superposition until measured. Recent experiments, particularly those related to quantum key distribution (QKD), suggest that the observed correlations align with quantum mechanics predictions, challenging Einstein's views. Participants express skepticism about the interpretations of experimental results, emphasizing the need for higher quantum efficiency in detectors to clarify the debate. Ultimately, the conversation highlights ongoing tensions between classical and quantum interpretations of reality.
  • #91
vanesch said:
And indeed, theoretically such a possibility must be considered for a "loophole free" test. So this is the famous "efficiency loophole".

cheers,
Patrick.

Vanesch,

I freely acknowledge the "detection loophole" and the "fair sampling" assumption (see more on that, next post, reference probably stolen from you already anyway). The question, don't you think, is what is its significance?

We could also have a leap year loophole. Scientific experiments run in a leap year give different results than other years. Or if run in the Southern hemisphere. Or in *France*, for god's sake. :smile: The point is, why is it that only EPR tests should have such loopholes heaped upon them? Why not double slit experiments, etc. etc.

This same thing - creating ad hoc evidentiary requirements - is also done to evolutionary theory. (Next we will be hearing about "intelligent design" in EPR experiments.) I guess, to be fair, special relativity (and the "one way speed of light" controversy) gets some of the same heaped upon it.

It seems to me that improvements in technology (leading to experiments with greater accuracy) make such "loopholes" rapidly approach moot status. I guess the great thing about science is that nothing is ever quite 100.0000% settled.
 
Last edited:
  • #92
JohnBarchak said:
With 10% efficiency on photon detection, the total inability to determine how many photons are involved, and the total inability to determine the energy involved, it is absolutely amazing that anyone concluded anything. I do not blame Aspect; he was not the one making the incredible claims.

All the best
John B.

You are way behind:

Experimental violation of a Bell's Inequality with efficient detection
M. A. Rowe, et al
Nature, vol 409, February 2001

CHSH value of 2.25 ± 0.03, where 2.00 is the maximum allowed by local realistic theories. "...the high detection efficiency of our apparatus eliminates the so-called detection loophole. ... The result above was obtained using the outcomes of every experiment, so that no fair sampling hypothesis was required."
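For anyone following the numbers, here is a minimal sketch of my own (not from the Rowe paper, which uses trapped ions rather than photons, though the CHSH logic is the same) of where the classical bound of 2 and the quantum-mechanical maximum of about 2.83 come from, assuming the ideal photon-pair correlation E(a,b) = cos 2(a-b):

```python
# Minimal sketch (illustration only) of the CHSH quantity S.
# Assumes the ideal QM correlation E(a, b) = cos(2*(a - b)) for
# Type-I polarization-entangled photon pairs; local realistic theories obey |S| <= 2.
import math

def E(a_deg, b_deg):
    """Ideal QM correlation between polarizer angles a and b (degrees)."""
    return math.cos(2 * math.radians(a_deg - b_deg))

def chsh(a, a2, b, b2):
    """CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b')."""
    return E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

# Standard optimal angle choices (degrees)
S = chsh(0, 45, 22.5, 67.5)
print(f"QM prediction: S = {S:.3f}")   # ~2.828, i.e. 2*sqrt(2)
print("LR bound:      S <= 2.000")
# Real experiments, with imperfect visibility and efficiency, land in between --
# e.g. the Rowe et al. value of 2.25 +/- 0.03 quoted above.
```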

By the way, there is nothing weird about drawing strong conclusions from small samples. There is a branch of science called "statistics" ... you might learn from its study!
 
Last edited:
  • #93
DrChinese said:
We could also have a leap year loophole. Scientific experiments run in a leap year give different results than other years. Or if run in the Southern hemisphere. Or in *France*, for god's sake. :smile: The point is, why is it that only EPR tests should have such loopholes heaped upon them? Why not double slit experiments, etc. etc.

This same thing - creating ad hoc evidentiary requirements - is also done to evolutionary theory. (Next we will be hearing about "intelligent design" in EPR experiments.) I guess, to be fair, special relativity (and the "one way speed of light" controversy) gets some of the same heaped upon it.

Exactly. That's why I stopped discussing with these people; we have two fundamentally different views of how science works. Science works (in my opinion, which is, I think, well-informed on the question) on two requirements:
1) logical consistency and sufficient generality of a theory (which is essentially a way of mapping "things in the lab" onto a mathematical model, within boundaries, but not tied to specific lab situations ad hoc)
2) agreement between the numerical predictions of said theory in given circumstances describing an experiment and the actual outcomes of those experiments.

The logical consequence is that you can disprove specific theories (either because they don't satisfy 1) or because they fail on 2)), or even specific classes of theories which is just a loop over 1) and 2). But you can never PROVE a theory, nor can you disprove one particular aspect of what could be contained in 1).

And that is exactly what many people try to do, or accuse "scientists" of failing to do. No, you cannot *absolutely prove* QM, or the *existence of photons*. You can only show that theories using them make accurate predictions and satisfy 1) and 2) to date, and compare this to other theories which also satisfy 1) (such as classical optics). You cannot prove the nonexistence of LR theories if the class is too wide.

What they should do, in order to be taken seriously, is to produce specific theories, or classes of theories, that contain their pet principle, and satisfy 1) and 2). But they rarely (if ever) do.

They just say: hey, for _this_ specific experimental result, I can think up a theory that respects my pet principle and produces the same results - if I'm allowed to change the behaviour of all known apparatus. But for the next experiment, they do the same, but with DIFFERENT theories and different behaviour of the apparatus. This means that their view doesn't satisfy 1).

The nicest attempt that I've seen was "stochastic electrodynamics". I think it has a problem with thermodynamics, and with the rest of quantum theory, but at least it tried to construct an equivalent theory in optics having LR.

cheers,
Patrick.
 
  • #94
DrChinese said:
You are way behind:

Experimental violation of a Bell's Inequality with efficient detection
M. A. Rowe, et al
Nature, vol 409, February 2001"

YOU are the one who brought up Aspect!
 
  • #95
MY "Classical view" of the photon as a particle
vanesch said:
But it is EXACTLY to such a situation that Bell applies, and this is exactly quantum mechanics! Indeed, the photons that pass have the polarization of the last filter.
So if my "Classical view of a particle" is QM, can you help me understand the mechanics of how QM explains a photon changes its polarization as it goes through a filter?

ALSO
For instance, we know that the polarizations are "opposite" in the two photons of a photon pair. So if one polarizer is horizontal and lets its photon pass, and the other one is vertical, it should let its photon pass: "100% correlation"
Patrick.
Just to confirm a point (maybe only important to the testers), but somewhere I'd picked up the idea that entangled photons came out with the same polarization. Testers must set their 0 degrees mark for correlation in test areas A and B 90 degrees apart from each other, correct?

Thanks, I think I'm getting it.
 
  • #96
RandallB said:
1. MY "Classical view" of the photon as a particle
So if my "Classical view of a particle" is QM, can you help me understand the mechanics of how QM explains a photon changes its polarization as it goes through a filter?

2. ALSO: Just to confirm a point (maybe only important to the testers), but somewhere I'd picked up the idea that entangled photons came out with the same polarization. Testers must set their 0 degrees mark for correlation in test areas A and B 90 degrees apart from each other, correct?

Thanks, I think I'm getting it.

1. Vanesch can probably answer this better. As far as I know, there is no real "mechanical" explanation of spin intrinsics. It just is. This is one of the elements of QM that some find objectionable. By analogy, it is no different than what happens when an electron moves from one orbital to another. It just does.

2. I always refer to 0 degrees as the correlated case because it is easier to discuss. Actually the spins are crossed - i.e. 90 degrees apart - but that is easily compensated for as you state.
 
  • #97
JohnBarchak said:
YOU are the one who brought up Aspect!

?

OK, here is another simple question for you to evade: do you or do you not accept the results of Aspect as proof of a violation of Bell's Inequality?

If the answer is NO, then: do you or do you not accept the results of Rowe as proof of a violation of Bell's Inequality?
 
  • #98
RandallB said:
MY "Classical view" of the photon as a particle
So if my "Classical view of a particle" is QM, can you help me understand the mechanics of how QM explains a photon changes its polarization as it goes through a filter?

Because in quantum mechanics, an x-polarized photon can just as well be seen as a superposition of a 45 degree left and a 45 degree right polarized photon. And then only one of the two components gets through. It is not that there is a mechanistic explanation of something "tilting" the axis of polarization. In QM formalism, it is just a "change of basis". But you touch indeed upon one of the most bizarre properties of QM. In fact, all these things are different expressions of THE bizarre property of QM, namely the superposition principle. And it is the cornerstone of QM which gives rise to about all of its results.
However, in this particular case, the analogy with classical optics is striking: you wouldn't argue that something "tilted" the plane of the E-field when it went through a polarizer, right ? Well, exactly the same thing applies to the photon.
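To make the "change of basis" point concrete, here is a minimal sketch (assuming ideal polarizers and the standard Malus-law probability cos² of the angle difference for a single photon) of the classic three-polarizer sequence: an H photon never passes a V filter, but inserting a 45 degree filter in between lets a quarter of the photons through.

```python
# Minimal sketch (illustration only): probability that a photon prepared at
# polarization angle `prep` passes a sequence of ideal polarizers.
# After each filter it passes, the photon's polarization is "re-based" to that
# filter's axis -- exactly the change-of-basis point made above.
import math

def pass_probability(prep_deg, filter_angles_deg):
    p, current = 1.0, prep_deg
    for f in filter_angles_deg:
        p *= math.cos(math.radians(f - current)) ** 2  # Malus-law probability
        current = f                                     # photon now polarized along f
    return p

print(pass_probability(0, [90]))        # H photon, V filter: 0.0
print(pass_probability(0, [45, 90]))    # H photon, D then V filter: 0.25
```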

ALSO: Just to confirm a point (maybe only important to the testers), but somewhere I'd picked up the idea that entangled photons came out with the same polarization. Testers must set their 0 degrees mark for correlation in test areas A and B 90 degrees apart from each other, correct?

In fact, both occur. Some parametric down converters are of type I, and then indeed, both have the same polarization. Others are of type II, and then they are perpendicularly polarized. (or was it the opposite?).

cheers,
Patrick.
 
  • #99
DrChinese said:
... do you or do you not accept the results of Rowe as proof of a violation of Bell's Inequality?
I think it is in http://arxiv.org/abs/quant-ph/0102139 that Lev Vaidman explains why, though an inequality was violated, we don't have to interpret it as illustrating entanglement: the ions in Rowe's experiment were very close together. The measurements on them could not be considered (as required for the Bell inequality) to be independent.

Cat
 
  • #100
Cat said:
I think it is in http://arxiv.org/abs/quant-ph/0102139 that Lev Vaidman explains why, though an inequality was violated, we don't have to interpret it as illustrating entanglement: the ions in Rowe's experiment were very close together. The measurements on them could not be considered (as required for the Bell inequality) to be independent.

Cat

Lev doesn't think that experiment has eliminated the detection ("fair sampling") issue because of the locality issue. But that is merely one person's opinion.

It is clear to me from the Rowe and Aspect experiments (and others such as Weihs):

a. Locality: the Inequality is violated whether or not the apparatus is space-like separated, per Weihs. (Requiring this never made any sense in the first place, because it requires the existence of physical effects never otherwise witnessed.)

b. Fair Sampling: the Inequality is violated whether or not a fair sample is obtained, per Rowe. (Requiring a complete sample never made sense to me either, as a large subsample could not possibly show more correlations than are actually present in the full population.)

Combining these two, you know that locality and sampling are not factors in the correlated events. That should be sufficient to address the lingering doubts of most scientists.
 
  • #101
Cat said:
I think it is in http://arxiv.org/abs/quant-ph/0102139 that Lev Vaidman explains why, though an inequality was violated, we don't have to interpret it as illustrating entanglement: the ions in Rowe's experiment were very close together. The measurements on them could not be considered (as required for the Bell inequality) to be independent.

Cat

While this may not be entirely relevant to the point you're trying to make, take note that the existence of entanglement is not solely verified via the EPR-type experiments. Two entangled photons, for example, are not separable and can essentially be described as a connected, single system. This means the pair behaves as a macro particle with twice the energy of a single, isolated photon. If this is true, then one should be able to do a diffraction experiment with a higher resolution using the entangled pair than with a single photon, since the entangled macro particle has twice the energy (and thus, half the wavelength) of a single, unentangled photon.

Guess what? That's what has been observed, and in two separate experiments![1,2] The entangled photons can beat the diffraction limits of single photons. The first experiment showed interference patterns from a state of 3 entangled photons, while the other showed a state of 4 entangled photons. In both cases, the resolution was better than for a single photon: they were lambda/3 and lambda/4 respectively, as expected.

One can read a brief report of these experiments here

http://physicsweb.org/article/news/8/5/6

As is typical in physics, a particular idea, principle, or theory isn't verified with just one experiment. Often, several different experiments and techniques are required for a convincing verification. This appears to be the case here.

Zz.

[1] P. Walther et al., Nature v.429, p.158 (2004).
[2] M.W. Mitchell et al., Nature v.429, p.161 (2004).
 
  • #102
ZapperZ said:
One can read a brief report of these experiments here

http://physicsweb.org/article/news/8/5/6

As is typical in physics, a particular idea, principle, or theory isn't verified with just one experiment. Often, several different experiments and techniques are required for a convincing verification. This appears to be the case here.

Zz.

[1] P. Walther et al., Nature v.429, p.158 (2004).
[2] M.W. Mitchell et al., Nature v.429, p.161 (2004).


Hi ZapperZ,

this article is very interesting. I do have a question, though, because this experiment is not entirely clear to me. Let me explain how I see it: if two photons are entangled, their wavelength is indeed half that of one photon. These photons can be entangled via parametric down conversion. In order to check whether photons are entangled, can you do this? Suppose you construct a plate through which the photons have to pass. Make little holes in this plate so that photons can pass through them. Now (according to me, but I am not sure) the trick is to make the dimension of these holes as big as the wavelength of an entangled pair, so that "ordinary" unentangled photons cannot pass through (their wavelength is too big).

Hence, the photons you detect after passing through the plate are entangled for sure.

Is this the way to look at this experiment and does my point make sense?

Please elaborate if I am wrong...

Thanks in advance.

regards
marlon
 
Last edited:
  • #103
DrChinese said:
Lev doesn't think that experiment has eliminated the detection ("fair sampling") issue because of the locality issue. But that is merely one person's opinion.
Surely one experiment with perfect detectors has no effect on the logic of a loophole that is present when they are not perfect? [See the original paper on the subject of the detection loophole -- Pearle, P, “Hidden-Variable Example Based upon Data Rejection”, Physical Review D, 2, 1418-25 (1970)]

It is clear to me from the Rowe and Aspect experiments (and others such as Weihs):

a. Locality: the Inequality is violated whether or not the apparatus is space-like separated, per Weihs. (Requiring this never made any sense in the first place, because it requires the existence of physical effects never otherwise witnessed.)
I agree that, in general, the separation is irrelevant. However see below for more on the Rowe et al experiment.

b. Fair Sampling: the Inequality is violated whether or not a fair sample is obtained, per Rowe. (Requiring a complete sample never made sense to me either, as a large subsample should not possibly show more correlations than are actually present in the full population.)
No, this has not been proven and the logic of Pearle's paper says it is not in general true. I've had a look at one of the papers on Rowe's experiment, and it is open to more problems than have so far been discussed. It is not only a straightforward matter of whether or not signals could have been exchanged (the locality loophole) and whether or not the sample was fair (the detection loophole). We agree, it seems, that the first is open but irrelevant, the second closed (since the sample was almost the entire population).

The problem here seems to be that Bell's inequality depends on being able to set your detectors independently and also on being able to measure your particles separately. It is not at all clear that the detector settings can be regarded as independent, and it is most certainly not true that the particles are measured separately. What is measured is the intensity of the combined signal from both. There is no way, when this is at half strength, of knowing which particle contributed that half -- see Kielpinski, David et al, “Recent Results in Trapped-Ion Quantum Computing”, http://arxiv.org/abs/quant-ph/0102086.

Cat
 
Last edited:
  • #104
Anybody here who can help me out with the question in my previous post on detecting entangled photons?

Thanks
marlon
 
  • #105
marlon said:
Hi ZapperZ,

this article is very interesting. I do have a question, though, because this experiment is not entirely clear to me. Let me explain how I see it: if two photons are entangled, their wavelength is indeed half that of one photon. These photons can be entangled via parametric down conversion. In order to check whether photons are entangled, can you do this? Suppose you construct a plate through which the photons have to pass. Make little holes in this plate so that photons can pass through them. Now (according to me, but I am not sure) the trick is to make the dimension of these holes as big as the wavelength of an entangled pair, so that "ordinary" unentangled photons cannot pass through (their wavelength is too big).

Hence, the photons you detect after passing through the plate are entangled for sure.

Is this the way to look at this experiment and does my point make sense?

Please elaborate if i am wrong...

Thanks in advance.

regards
marlon

Keep in mind that to get any effects from diffraction, you need an opening that is of the order of, or less than, the size of the wavelength. If the opening is considerably larger than the wavelength, you get no diffraction effects.

What this means is that to get a diffraction pattern in the 2-photon case, the opening must be smaller than in the 1-photon case. This is because the wavelength of the 2-photon macro particle is smaller than that of the single photon. So this is the opposite of what you are describing.
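As a rough numerical sketch of the effective-wavelength idea (assuming an N-photon entangled state carries N times the energy, hence 1/N the wavelength, and taking 800 nm as an arbitrary example single-photon wavelength, not the value used in the cited experiments):

```python
# Rough sketch (illustration only): effective wavelength of an N-photon
# entangled state, assuming it behaves as one quantum with N times the
# energy of a single photon of wavelength lam_nm.
def effective_wavelength(lam_nm, n_photons):
    return lam_nm / n_photons

for n in (1, 2, 3, 4):
    print(f"N = {n}: effective wavelength = {effective_wavelength(800, n):.0f} nm")
# For an 800 nm single photon this gives 800, 400, 267, 200 nm -- i.e. the
# lambda/3 and lambda/4 interference periods reported for the 3- and 4-photon states.
```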

Zz.
 
  • #106
ZapperZ said:
Keep in mind that to get any effects from diffraction, you need an opening that is of the order of, or less than, the size of the wavelength. If the opening is considerably larger than the wavelength, you get no diffraction effects.

What this means is that to get a diffraction pattern in the 2-photon case, the opening must be smaller than in the 1-photon case. This is because the wavelength of the 2-photon macro particle is smaller than that of the single photon. So this is the opposite of what you are describing.

Zz.

Yes, thanks a lot Zz, I get the picture now. Indeed, what I stated previously is the exact opposite of what is going on. I realize that now.


regards
marlon
 
  • #107
marlon said:
Hi ZapperZ,

this article is very interesting. I do have a question, though, because this experiment is not entirely clear to me. Let me explain how I see it: if two photons are entangled, their wavelength is indeed half that of one photon. These photons can be entangled via parametric down conversion. In order to check whether photons are entangled, can you do this?


This article certainly didn't go unnoticed. I remember reading it when
it came out. I haven't seen any further details yet unfortunately.

The cost of the lens optics in semiconductor lithography wafer processing
equipment goes up from $3 million to $6 million if the laser wavelength is
reduced from 193 nm to 153 nm.

So you can imagine that any help in breaking the diffraction limit is really
appreciated! There is already a big bag of tricks used to break this limit.
Optical Proximity Correction, Phase Shift Masks and Liquid Immersion will
make it possible to draw 32 nm wide lines with 193 nm laser light in
production systems at the end of the decade.


marlon said:
Suppose you construct a plate through which the photons have to pass. Make little holes in this plate so that photons can pass through them. Now (according to me, but I am not sure) the trick is to make the dimension of these holes as big as the wavelength of an entangled pair, so that "ordinary" unentangled photons cannot pass through (their wavelength is too big).

Hence, the photons you detect after passing through the plate are entangled for sure.

It has been shown that light can pass through holes which are much
smaller than its wavelength, presumably by interacting with the electrons
in the material of the plate.

What about simple color filters?


Regards, Hans
 
  • #108
vanesch said:
an x-polarized photon can just as well be seen as a superposition of a 45 degree left and a 45 degree right polarized photon. And then only one of the two components gets through.
The term "x-polarized" dosn't mean "any" polarized photon does it? and would not include a V "|" or a H "-" photon would it?
Superposition of a V & H would be "+"
And if using a non perpendicular converter (call it type I) would the superposition types be discribed as "||", "//", "\\", "--"?

However, in this particular case, the analogy with classical optics is striking: you wouldn't argue that something "tilted" the plane of the E-field when it went through a polarizer, right ? Well, exactly the same thing applies to the photon.
Patrick.
Based on the fact that light goes through a set of H, D, V filters, don't we have to argue that something somehow does tilt or turn and change the polarization angle??
Randall B
 
  • #109
Can an experimental test of entanglement ever be considered complete?

Cat said:
The problem here seems to be that Bell's inequality depends on being able to set your detectors independently and also on being able to measure your particles separately. It is not at all clear that the detector settings can be regarded as independent, and it is most certainly not true that the particles are measured separately.

Cat

I do not agree with this statement, and I don't think it is represented in Bell's Theorem. I realize that locality is represented in the Theorem, I am not questioning that.

1. Suppose we set the polarizers at 22.5 degrees for one hour, and measure many event correlations. I am not concerned that the polarizer at A sent a message to the polarizer at B. I believe this is a red herring that confuses the locality issue. The observed correlation will be .8536 (ideal case of course) and we can be happy with that value as being valid in agreement with the predictions of the QM formalism. What underlying mechanism accomplishes this is totally irrelevant to this experimental result, just as the mechanism of the double slit is not relevant to that result. In each case, the Heisenberg Uncertainty Principle is respected. That's QM!

2. If (during its measurement) the particle at A sends a message to the particle at B via a non-local communication channel which is invisible to us, then that could explain the observed results (non-local realistic). This is true whether the measuring apparatus is space-like separated or not! This is where the assumption of locality fits into the Theorem. In other words: if you are postulating a non-local theory containing hidden variables, then Bell's Theorem does not apply at all.

3. On the other hand, the mathematical formalism of Bell's Theorem incorporates an explicit requirement for realistic (hidden variable) theories to which 2. does not apply - i.e. local realistic theories only. That requirement is the existence of a C to go along with polarizer angle settings A and B, the hypothetical other measurement that could have been performed. This leads to bounds on values that any such LR (LHV) theory can yield. And this has absolutely nothing to do with the measuring apparatus settings at A and B. It's all about throwing C into the equation. If there is no C, it's not realistic in the first place!

Bell's Theorem essentially says that the eight permutations below add to a total probability of 1:

[1] A+ B+ C+ (and the likelihood of this is >=0)
[2] A+ B+ C- (and the likelihood of this is >=0)
[3] A+ B- C+ (and the likelihood of this is >=0)
[4] A+ B- C- (and the likelihood of this is >=0)
[5] A- B+ C+ (and the likelihood of this is >=0)
[6] A- B+ C- (and the likelihood of this is >=0)
[7] A- B- C+ (and the likelihood of this is >=0)
[8] A- B- C- (and the likelihood of this is >=0)

...But that there are no such predictive values for A, B, C that match QM's predictions. You don't need to do an experiment to reach this conclusion. Without moving from my armchair, I can see that all LR theories respecting Bell make radically different predictions than QM. Therefore experiments that support QM also rule out LR.
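For concreteness, here is a minimal sketch of this armchair argument (my own illustration, assuming same-polarization Type I pairs, for which ideal QM predicts a mismatch probability of sin² of the angle difference between the two polarizer settings): enumerate all eight definite-value assignments for A, B, C, check that every one of them, and hence every probabilistic mixture of them, obeys a Wigner-style Bell bound, and compare with the QM numbers at 0, 22.5 and 45 degrees.

```python
# Minimal sketch (illustration only): Bell/Wigner-type bound from enumerating
# all definite-value ("realistic") assignments of pass(+1)/block(-1) to the
# three polarizer settings A, B, C, versus the ideal QM mismatch probability
# sin^2(delta) for same-polarization (Type I) entangled pairs.
import math
from itertools import product

def mismatch(x, y):
    return 1.0 if x != y else 0.0

# Every deterministic assignment obeys  P(A != B) <= P(A != C) + P(C != B),
# so every mixture of them (i.e. every local hidden-variable model) does too.
for A, B, C in product((+1, -1), repeat=3):
    assert mismatch(A, B) <= mismatch(A, C) + mismatch(C, B)

def qm_mismatch(delta_deg):
    return math.sin(math.radians(delta_deg)) ** 2

# Settings A = 0, B = 45, C = 22.5 degrees:
lhs = qm_mismatch(45)                          # QM: P(A != B) = 0.500
rhs = qm_mismatch(22.5) + qm_mismatch(22.5)    # LHV bound:     0.293
print(lhs, rhs, "QM violates the LHV bound" if lhs > rhs else "no violation")
```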

After all, the purported local counter-explanations are essentially "un-realistic" in the first place; we are told to expect MORE correlations than unentangled random chance should allow! It is the reverse that should happen.

4. It makes no sense to say that the experiment was contaminated so that entanglement "appears to occur" even though there is no actual entanglement. That isn't really science. If it is there, where is it? What causes it? Measure it! Explain it! Why haven't we ever noticed it before? Why doesn't it show up in experiments designed to look for it? Test results don't vary when heretofore unknown local causes are eliminated as a factor, and they don't vary when sampling is eliminated as a factor. If these were really loopholes, tests would have identified this. But instead, every Bell test shows the same results: LR is ruled out by 5/10/20/30+ standard deviations.

Perhaps someone should write a paper entitled "Can an experimental test of entanglement ever be considered complete?"
 
  • #110
Cat said:
Surely one experiment with perfect detectors has no effect on the logic of a loophole that is present when they are not perfect? [See the original paper on the subject of the detection loophole -- Pearle, P, “Hidden-Variable Example Based upon Data Rejection”, Physical Review D, 2, 1418-25 (1970)]

Cat

I don't think I follow your reasoning, assuming I have it correct. You are saying that the loophole is a loophole even if it is determined not to be a loophole?

Suppose you postulate that there is a variable that affects the results of an experiment. Then you measure that variable, and find that its contribution to the experimental result is zero. The conclusion, pure and simple, is that your postulated "loophole" is non-existent. Therefore it is no longer a loophole. Period.

Example: I hypothesize that experiments performed on Tuesday yield more correlations than those performed on Wednesday. Simply run the test on both days and now you know the answer - no effect. You don't need to run all subsequent tests on both Tuesday and Wednesday to know they are valid.

How could it ever be otherwise? (Unless of course, you simply reject the experiment in toto.)
 
Last edited:
  • #111
DrChinese said:
I don't think I follow your reasoning, assuming I have it correct. You are saying that the loophole is a loophole even if it is determined not to be a loophole?
Sorry, you're right: what I wrote was not logical. What I meant made sense but only because, as you suspected, I reject the Rowe et al experiment in toto. I maintain that none of the experiments to date have shown violations of a Bell inequality except in the presence of "real", functional, loopholes.

Cat
 
  • #112
DrChinese said:
...there are no such predictive values for A, B, C that match QM's predictions.
That's only true when the predictions are based on Light wave theory.
I don't think Dr. E, Dr. P, or Dr. R had a problem with looking at light as a particle. Neither did Newton, actually, but that was long before these issues. They just had a problem with the FTL implications of QM, AND suspected, hoped, believed, that a more complete description might help resolve how much FTL activity is real, if any.
While "C" & LR are helpful in confirming that wave theory is incomplete and that light must be quantized, I don't see where they're helpful in resolving the EPR issue.

HOWEVER, I do find the implications of the A-B test very interesting, as I believe I've made an incorrect assumption there! I'd like to take a closer look at the 100% correlation expected by QM and reported by the testing. Shouldn't the expected correlation be 75%?

Just to review the test: the entangled generator is producing photon pairs with polarizations separated by a fixed, unchanging angle, 0 or 90 degrees, depending on 'type'. But the base original angle for each set of pairs is random through 360 degrees. Therefore testers A and B can be rotated together to any measuring angle, and the same result in rate & number of hits and the same 100% correlation are always found. Correct?

Source of the 75% vs. 100% correlation conflict -- a cross-check of the test:
Suppose we replace the "entangled" light source with TWO independent light sources that only generate photons with a fixed polarization, wired to generate simultaneous light signals, polarized and aligned to match the original 0 or 90 degree separation type as needed. But to duplicate the random base alignment of each photon pair, like the original entangled source generates, their base angle is incrementally advanced around 360 degrees for equal time intervals over the total test.
Under these conditions A and B will always see the light pass in the V alignment and always see it blocked in the H alignment: 100% correlation. But when the alignment is on the diagonal, 50% of the signals will go through at each side, with only a 50% chance of correlation.
The end result is that the total testing will give 75% correlation.
Has a test run like this been done? Seems pretty simple; I'd hope someone has.
IS there any doubt this is true?

YET, Testing with the entangled source shows 100% correlation!
Isn't this more significant than the arguing over loopholes? How does the correlation on the diagonals get to 100%?
(Some FTL function ensures common interaction with off-baseline angled filters?)
OR (maybe a better description should be possible as Mr. E P R argued?)
Has any attention been given this? - I.E. has anyone even tried to propose a non-QM explanation?
Does QM provide any explanation? Or is having a statistical form that correctly predicts the results all that can be expected of QM?
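For concreteness, a minimal Monte Carlo sketch of the 75% figure described above (my own illustration, assuming the parallel-polarization Type I convention so both polarizers share the same setting, and assuming each photon passes its polarizer independently with the Malus-law probability cos² of the angle difference):

```python
# Sketch (illustration only) of the "two independent, identically polarized
# sources" model: both photons share a common polarization angle theta
# (sampled uniformly, equivalent to stepping evenly around 360 degrees), and
# each passes its own polarizer independently with Malus-law probability
# cos^2(theta - setting). "Correlated" = both pass or both are blocked.
# Ideal QM, by contrast, predicts 100% correlation at equal settings.
import math, random

def classical_correlation(trials=200_000, setting_deg=0.0):
    correlated = 0
    for _ in range(trials):
        theta = random.uniform(0.0, 360.0)                 # common source angle
        p = math.cos(math.radians(theta - setting_deg)) ** 2
        a = random.random() < p                            # photon at A passes?
        b = random.random() < p                            # photon at B passes?
        correlated += (a == b)
    return correlated / trials

print(f"classical model: {classical_correlation():.3f}")   # ~0.75
print("QM prediction at equal settings: 1.000")
```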
 
  • #113
RandallB said:
Originally Posted by DrChinese: ...there are no such predictive values for A, B, C that match QM's predictions.

1. That's only true when the predictions are based on Light wave theory.

2. HOWEVER, I do find the implications of the A-B test very interesting, as I believe I've made an incorrect assumption there! I'd like to take a closer look at the 100% correlation expected by QM and reported by the testing. Shouldn't the expected correlation be 75%?

Just to review the test: the entangled generator is producing photon pairs with polarizations separated by a fixed, unchanging angle, 0 or 90 degrees, depending on 'type'. But the base original angle for each set of pairs is random through 360 degrees. Therefore testers A and B can be rotated together to any measuring angle, and the same result in rate & number of hits and the same 100% correlation are always found. Correct?

3. Source of the 75% vs. 100% correlation conflict -- a cross-check of the test:
Suppose we replace the "entangled" light source with TWO independent light sources that only generate photons with a fixed polarization, wired to generate simultaneous light signals, polarized and aligned to match the original 0 or 90 degree separation type as needed. But to duplicate the random base alignment of each photon pair, like the original entangled source generates, their base angle is incrementally advanced around 360 degrees for equal time intervals over the total test.
Under these conditions A and B will always see the light pass in the V alignment and always see it blocked in the H alignment: 100% correlation. But when the alignment is on the diagonal, 50% of the signals will go through at each side, with only a 50% chance of correlation.
The end result is that the total testing will give 75% correlation.
Has a test run like this been done? Seems pretty simple; I'd hope someone has.
IS there any doubt this is true?

YET, Testing with the entangled source shows 100% correlation!
Isn't this more significant than the arguing over loopholes? How does the correlation on the diagonals get to 100%?
(Some FTL function ensures common interaction with off-baseline angled filters?)
OR (maybe a better description should be possible as Mr. E P R argued?)
Has any attention been given this? - I.E. has anyone even tried to propose a non-QM explanation?
Does QM provide any explanation? Or is having a statistical form that correctly predicts the results all that can be expected of QM?

1. QM makes specific predictions in this case. You cannot devise a theory in which A, B and C have independent simultaneous reality and match QM.

2. Yes, 100% is the QM prediction for 0 degrees. But there is no such thing as "the base original angle for each set of pairs is random through 360 degrees" as you state. This is a classical picture and is inconsistent with observation.

3. Your example, assuming I understand it, would yield 75% instead of 100% as predicted by QM. Therefore, I would conclude that your example is not representative of what is actually happening - although it would respect Bell's Inequality.
 
  • #114
DrChinese said:
2. But there is no such thing as "the base original angle for each set of pairs is random through 360 degrees" as you state. This is a classical picture and is inconsistent with observation.
What are you talking about "classical" here??
I'm referring to the entangled photon generator!

OR are you saying that a parametric down converter that generates perpendicularly polarized photon pairs (Type II) gives polarizations of 0 degrees and 90 degrees only!
With the twins never being at the same angle (Type I, always the same), BUT always at either 0 or 90 degrees and never any other angle?
Thus the test in A must set 0 to 0 degrees and B must set 0 to 90 degrees (Type II) for the test to work. Because if they were set to, say, -45 and +45, a test run would still have light coming through half the time, but it would only have, and QM would only predict, a 50% correlation??
Thus one of the critical settings is aligning the observer’s polarization to the alignment of the down converter that produces the entangled photons??

If so, I had misunderstood the test configuration, and the need to align the test to the down converter, which bothers me a bit.
DO I have the above correct??

RB
(Also do you know if I have Type I and Type II defined correctly)
 
  • #115
RandallB said:
What are you talking about "classical" here??
I'm referring to the entangled photon generator!

You said "random through 360 degrees" and there is nothing random through 360 degrees. The light is polarized upon going through the prisms, not before.

I am fine with what you describe otherwise.
 
  • #116
DrChinese said:
You said "random through 360 degrees" and there is nothing random through 360 degrees. The light is polarized upon going through the prisms, not before.

I am fine with what you describe otherwise.
I'd never understood that the source light was polarized in fixed positions before. I thought they could come out of the parametric down converter at any angle - just that they had to be at a fixed angle, 0 or 90 degrees, from each other.

That they would always be fixed at 0 or 90 degrees in relation to outside observers - well, I think that's a problem for me. I'll need to think on it a bit.

RB
 
  • #117
RandallB said:
I'd never understood that the source light was polarized in fixed positions before. I thought they could come out of the parametric down converter at any angle - just that they had to be at a fixed angle, 0 or 90 degrees, from each other.

That they would always be fixed at 0 or 90 degrees in relation to outside observers - well, I think that's a problem for me. I'll need to think on it a bit.

RB

You have it fine... all I mean is that it is "unpolarized" before it is polarized. The point I am making by saying it is unpolarized is that the polarization is neither known nor definite.
 
  • #118
DrChinese said:
You have it fine... all I mean is that it is "unpolarized" before it is polarized. The point I am making by saying it is unpolarized is that the polarization is neither known nor definite.
"unpolarized" before it is polarized by the parametric down converter. Correct.

In our experiments the photon that is coming to the test area is known to be polarized. And it must be polarized in either an H or a V direction, based on how parametric down converters work. Never at any other angle. It is only the "either or" part that is not known. The two angles they can come in at, H and V or 0 and 90 degrees, are known in advance.

Just want to be sure I have this right, because it is not what I'd thought before.
 
  • #119
RandallB said:
"unpolarized" before it is polarized by the parametric down converter. Correct.

In our experiments the photon that is coming to the test area is known to be polarized. And it must be polarized in either an H or a V direction, based on how parametric down converters work. Never at any other angle. It is only the "either or" part that is not known. The two angles they can come in at, H and V or 0 and 90 degrees, are known in advance.

Just want to be sure I have this right, because it is not what I'd thought before.

No, PDC type I produces pairs that are linearly polarized but the only thing we know is that they are parallel. Type II produces perpendicular pairs.

Here is a spot that discusses a bit: http://scotty.quantum.physik.uni-muenchen.de/exp/psrc/entangle.html

If the actual polarization was known, the spin correlation would cease.
 
Last edited by a moderator:
  • #120
DrChinese said:
No, .
No to what?
The link you provided seems to confirm what I said - that is, both PDC type I and type II produce polarizations that are V or H. Photon pairs might be 1) VV or HH, OR 2) VH or HV, depending on type. But as I said, never at any angle other than 0 or 90 (like 15 & 15 for a type I).

BTW Thanks for confirming I understood correctly in calling Type II as the one making perpendicular pairs.

Maybe you misunderstood what I'd said?

If the actual polarization was known, the spin correlation would cease.
Well sure, the TESTER "knows" the photon will be 'either' H 'or' V, but it is unknown which it will be till it is tested. More than unknown: from a QM view it is "undetermined".
 
