Bell experiment would somehow prove non-locality and information FTL?

The Bell experiment illustrates quantum entanglement, where two particles created together exhibit correlated properties, such as spin, regardless of the distance separating them. Observing one particle determines the state of the other, leading to interpretations of non-locality or faster-than-light information transfer. However, some argue that the particles' states are predetermined at creation, and the act of measurement merely reveals these states without invoking non-locality. The discussion also touches on Bell's Theorem, which posits that local realism cannot coexist with quantum mechanics, suggesting that hidden variable theories are insufficient. Ultimately, the debate centers on the nature of reality and measurement in quantum mechanics, emphasizing that the observed correlations do not imply any mysterious influence or faster-than-light communication.
  • #151
wm said:
Doc, this seems a strange use of the HUP? It had not occurred to me that you were using it that way.

And surely you are not correct? By testing one particle I can observe its pristine reaction to an 'a' setting. By testing the other particle I can observe its pristine reaction to a 'b' setting. I have THUS learned something MORE about each twin!

Given one particle only, the HUP says this is impossible, and I agree; it's that quanta again; a particle is pristine ONCE ONLY. BUT given two, it's surely common sense that we learn MORE about each?

Am I missing something here? wm

Alice tests her particle in the "pristine" condition for setting A, and then Bob tests his particle in the "pristine" condition for setting B. Yes, it is common sense that we learned something about Bob's particle from the test we did on Alice. But that would in fact violate the HUP, and so it turns out that the common sense explanation is false.

For us to have learned about Bob's particle at setting A, we would need to be able to perform another test on that particle at setting A and get the answer we expect due to our test on Alice at setting A. If you actually perform such a test, you do not get any higher match rate (we would be looking for more perfect correlations at that point).

The above holds true REGARDLESS of the order or timing of the 2 measurements. So the HUP (Heisenberg Uncertainty Principle) is not violated.

In my opinion, both the HUP and Relativity are fundamental and important principles that guide how we observe particle events. These are both consistent with Bell's Theorem.
 
  • #152
DrChinese said:
1. Actual experiments are designed to rule out other sources.

2. It actually should be .25 for matches and .75 for mismatches if we have 1.0 for the matches in your I. case. This comes from the cos^2 rule with 120 degrees as the difference in settings.

3. This is correct, we need to consider both of these together.

4. Good, you are seeking possible explanations for the results. And now we find ourselves considering new physical phenomena not otherwise known... such as backwards in time signaling and hypothetical effects derived from previously unknown sources. But these have severe theoretical problems too, since they only appear for entangled particles.

5. It is not a requirement that there is a common source, but that is certainly one possibility.

Your general line of approach is definitely improving.
Yeah, I'm learning :smile:

And some remarks:

2. I got it the other way round in my imaginary case, but the arguments I pose are unaffected, so regard the experiment as having a .25 chance for matches and .75 for mismatches.

3. Yes. Otherwise, we could just reason that something entirely within the box calculates the values. Although I assume, at the basis, that there IS a (hidden?) common source, which is TIME.
Therefore - in that reasoning - I expect that the simultaneity DOES matter significantly (it must match EXACTLY).

4. Well, that is a bit of a hypothesis, but not entirely my own, since I remember remarks by Feynman, who claimed that a positron just behaves (in the mathematical description) as an electron moving backward in time. Now a photon would be its own anti-particle...

[ And as a side note, usually we would say (from common sense, or formal logic) that this is nonsense, since this sort of thing would assume that an effect could predate a cause. Again, that is something which dialectics has no trouble understanding, which is another example of why these preconditions of formal logic and/or formal thinking often limit us in seeing what happens. ]

5. Not a requirement? I think that it follows from 3... We have by definition a common source, since time is a common source. But that is not in itself enough: there must be a way for each detector to be influenced by the other detector (an influence on the value we obtain from the detector). That is the whole point I tried to make.
 
Last edited:
  • #153
heusdens said:
This last means, with 50/50 chance I suppose??

And if it does not matter whether there is coincidence, then there is something I don't understand. My reasoning would be that it matters a lot that they are exactly in sync. How could out-of-sync measurements be statistically random (for every separate measurement) AND correlated with the other measurement?

That is not clear to me.



And what about the other probabilities?

++ / -- has 50/50 (relative) probability?

+- / -+ also 50/50 (relative) probability?

(in my imaginary experiment, I just assume this to be the case, as well as that the negatively correlated fraction has a probability of 25%)

1. The odds of the ++ case always equals the -- case.
2. The odds of the +- case always equals the -+ case.
3. BUT odds of the ++ case DOES NOT equal the +- case (except at very specific settings which are not worth discussing). Ditto for the other permutations.

Yes, it is strange. Each stream of outcomes follows a perfectly random sequence. Each sequence will individually contain an equal number of + and - outcomes. Yet there will be a discernible pattern when these streams are correlated and it WILL violate Bell's Inequality.
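
A quick way to see all three points at once is a small Monte Carlo sketch. This is an illustration only, not a local model: it simply imposes the quantum match rule P(same) = cos^2(theta_A - theta_B) quoted earlier in the thread, and shows that each stream is individually 50/50 random while the four joint counts are not all equal. The function name and numbers are my own.

```python
# Minimal sketch (not a local hidden-variable model): impose the quantum match
# rule P(same) = cos^2(theta_A - theta_B) and tabulate the four joint outcomes.
import random, math

def joint_counts(theta_a_deg, theta_b_deg, n=100_000):
    p_same = math.cos(math.radians(theta_a_deg - theta_b_deg)) ** 2
    counts = {"++": 0, "--": 0, "+-": 0, "-+": 0}
    for _ in range(n):
        a = random.choice("+-")                                   # Alice's stream: a fair coin
        b = a if random.random() < p_same else ("-" if a == "+" else "+")
        counts[a + b] += 1
    return {k: round(v / n, 3) for k, v in counts.items()}

print(joint_counts(0, 120))   # roughly {'++': 0.125, '--': 0.125, '+-': 0.375, '-+': 0.375}
```

At 120 degrees difference the ++ and -- counts agree, the +- and -+ counts agree, but ++ does not equal +-, exactly as stated above.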
 
  • #154
heusdens said:
Although I assume, at the basis, that there IS a (hidden?) common source, which is TIME.
Therefore - in that reasoning - I expect that the simultaneity DOES matter significantly (it must match EXACTLY).

There is a "common source" of the photons themselves, because they are actually described by a single wave function until one of them is observed. At that point the combined wave function collapses.

Now, when it comes to the time of measurement/observation, it does NOT matter to the results whether one is measured first or not. (The order of the observations is not relevant to Bell tests at least, although it might be noticeable for Quantum Erasers or certain other experimental setups.)

The easiest way to see this is by examining the following 3 cases:

A. Alice and Bob's detectors are exactly 10 meters from the PDC source (in opposite directions).
B. Alice's detector is exactly 10 meters from the PDC source, while Bob's detector is exactly 50 meters from the PDC source (in opposite directions).
C. Alice's and Bob's detectors are exactly 10 meters from the PDC source, and are co-located, but Bob's photon is routed through fiber for an extra 40 meters before arriving at the detector.

All Bell tests will yield exactly the same results in all 3 of the above cases. In all cases, we always define a time window for coincidence counting - usually something like +/- 20 nanoseconds. The actual choice is made based on the particular experimental setup, the laser intensity, etc.
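
For readers who want to see what coincidence counting amounts to in practice, here is a minimal sketch. The data format is an assumption (a time-sorted list of (timestamp, outcome) records per side), and it uses the +/- 20 ns window mentioned above; real analysis software is more involved.

```python
# Minimal sketch of coincidence counting: two events are paired when their
# timestamps fall within the window, regardless of which side fired first.
def coincidences(alice, bob, window_ns=20):
    pairs, j = [], 0
    for t_a, out_a in alice:
        while j < len(bob) and bob[j][0] < t_a - window_ns:
            j += 1                                  # skip Bob events too early to match
        if j < len(bob) and abs(bob[j][0] - t_a) <= window_ns:
            pairs.append((out_a, bob[j][1]))
            j += 1                                  # each Bob event is used at most once
    return pairs

alice = [(100, "+"), (600, "-"), (1200, "+")]
bob   = [(110, "+"), (590, "-"), (1650, "+")]       # third event falls outside any window
print(coincidences(alice, bob))                     # [('+', '+'), ('-', '-')]
```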
 
  • #155
DrChinese said:
There is a "common source" of the photons themselves, because they are actually described by a single wave function until one of them is observed. At that point the combined wave function collapses.

Now, when it comes to the time of measurement/observation, it does NOT matter to the results whether one is measured first or not. (The order of the observations is not relevant to Bell tests at least, although it might be noticeable for Quantum Erasers or certain other experimental setups.)

The easiest way to see this is by examining the following 3 cases:

A. Alice and Bob's detectors are exactly 10 meters from the PDC source (in opposite directions).
B. Alice's detector is exactly 10 meters from the PDC source, while Bob's detector is exactly 50 meters from the PDC source (in opposite directions).
C. Alice's and Bob's detectors are exactly 10 meters from the PDC source, and are co-located, but Bob's photon is routed through fiber for an extra 40 meters before arriving at the detector.

All Bell tests will yield exactly the same results in all 3 of the above cases. In all cases, we always define a time window for coincidence counting - usually something like +/- 20 nanoseconds. The actual choice is made based on the particular experimental setup, the laser intensity, etc.

This assumes a continuous stream of photons?

What if we change the setup so that there is one photon per time unit?

I assume that either the outcomes will then be totally different, or the sync must match.

I mean it would be rather like taking the data of one of the detectors and placing them in a queue - so as to be out of sync on purpose - before we qualify them as either matching or non-matching [at the 'coincidence' monitor], etc., which cannot give the same result.
Or, put differently: we just make the output of both detectors visible and separate.
If we combine the results to see if there is coincidence, we must of course match corresponding results, and if we do not, this cannot give the same probability for coincidences.
Place them in two tables, like this:

Detector #1

Seqno  Setting  Result
1      A        +
2      B        +
3      C        -

etc.

and same for

Detector #2

Seqno  Setting  Result
1      C        +
2      B        +
3      A        -

etc.

Now we should of course combine detector results with the same seqno.
If we make an arbitrary choice, like combining detector #1 seqno 1 with detector #2 seqno 3, we of course get invalid results.
 
  • #156
JesseM said:
Right, assuming we've switched from the assumption that Bob and Alice always get opposite results when they perform the same measurement to your new assumption that they always get the same result when they perform the same measurement. Again though, it's really confusing to have +1 represent both a possible result of one person's measurement and an outcome where they both got the same results, so I suggest using my notation S and D instead. Agreed. If you think you have a counterexample, or see a flaw in the short proof I gave, please present it.

Jesse, the flaw that I see is this: Your example relies for its success on the very limiting notion of Bell-reality; aka naive or strong reality. I have a counter-example, and am waiting for clarification re the rules (my post #74) here, so will contact you off-PF. (PS: I'm seeking to avoid the need for retyping it here.) wm
 
  • #157
And another remark. Suppose we emit, say, 100 discrete photons.
Does each get measured at both sides?
We assume the time unit is large enough that we have a measurement within the time unit at both detectors.

Suppose in my example (previous post) of the tables of outcomes, that we just store the data somewhere, until all the photons are measured, and only then do a coincidence count.

In that case, and if we can then also make an arbitrary synchronisation, I would be really baffled; that would mean sheer magic happens (already printed results must then suddenly change AFTER we did the experiment!).
 
  • #158
wm said:
Jesse, the flaw that I see is this: Your example relies for its success on the very limiting notion of Bell-reality; aka naive or strong reality.
If you're referring to the idea that measurement doesn't disturb the state, I specifically avoided using that assumption in my proof. I said that "a predetermined state of type A+ B- C- just means any state in which it is predetermined that if the experimenter chooses setting A she'll get +, if she chooses setting B or C she'll get -". So if you have a predetermined state X which is "of type A+ B- C-", you aren't assuming that the state X involves spin-up on the A axis and spin-down on the B and C axis prior to measurement, you're just assuming that given the initial state X and a measurement on the A axis this will deterministically cause the experiment to register spin-up, and given a measurement on the B or C axis this will deterministically cause the experiment to register spin-down. That's assuming A, B, C are measurements of spins, but the point would remain the same if you were talking about some other properties--I'm not assuming the measurement reveals a preexisting property, just that each possible initial state will give a determinate response to each of the three measurements.
wm said:
I have a counter-example, and am waiting for clarification re the rules (my post #74) here, so will contact you off-PF. (PS: I'm seeking to avoid the need for retyping it here.) wm
I think the rules would allow you to post what you think is a classical example that violates a Bell inequality, provided you present it in a tentative way where you're asking for feedback on the example and willing to be shown that it doesn't really violate Bell's theorem, rather than presenting it as a definitive disproof of mainstream ideas (note that heusdens' 'three datastreams' idea was not edited or deleted by the moderators, for example).

I'd encourage you to write up your example and then post it here so we all can see it (presenting it in the tentative way I suggested), and then if it's deleted by the moderators you could always resend it to me via PM.
 
Last edited:
  • #159
DrChinese said:
Alice tests her particle in the "pristine" condition for setting A, and then Bob tests his particle in the "pristine" condition for setting B. Yes, it is common sense that we learned something about Bob's particle from the test we did on Alice. But that would in fact violate the HUP, and so it turns out that the common sense explanation is false.

For us to have learned about Bob's particle at setting A, we would need to be able to perform another test on that particle at setting A and get the answer we expect due to our test on Alice at setting A. If you actually perform such a test, you do not get any higher match rate (we would be looking for more perfect correlations at that point).

The above holds true REGARDLESS of the order or timing of the 2 measurements. So the HUP (Heisenberg Uncertainty Principle) is not violated.

In my opinion, both the HUP and Relativity are fundamental and important principles that guide how we observe particle events. These are both consistent with Bell's Theorem.

I see no violation of HUP here. I see no need to abandon common-sense.

Does this help: If we carry out identical tests on the separated singlet-correlated twins (particles, say photons), then we confirm that such pristine twins do indeed return a certain (identical) polarisation.

HUP says you can never get such confirmation on a single pristine particle -- it being no longer pristine after the first (even single quanta) interaction.

Of course the situation is different if you have Bell/naive/strong-realism in mind. So: Rather than abandon common-sense, just abandon that erroneous realism (I say). wm
 
  • #160
heusdens said:
This assumes a continuous stream of photons?

What if we change the setup so that there is one photon per time unit?

I assume that either the outcomes will then be totally different, or the sync must match.

I mean it would be rather like taking the data of one of the detectors and placing them in a queue - so as to be out of sync on purpose - before we qualify them as either matching or non-matching [at the 'coincidence' monitor], etc., which cannot give the same result.
Or, put differently: we just make the output of both detectors visible and separate.
If we combine the results to see if there is coincidence, we must of course match corresponding results, and if we do not, this cannot give the same probability for coincidences.
Place them in two tables, like this:

Detector #1

Seqno  Setting  Result
1      A        +
2      B        +
3      C        -

etc.

and same for

Detector #2

Seqno  Setting  Result
1      C        +
2      B        +
3      A        -

etc.

Now we should of course combine detector results with the same seqno.
If we make an arbitrary choice, like combining detector #1 seqno 1 with detector #2 seqno 3, we of course get invalid results.
Each photon has only a single entangled "twin" that it can be matched with; you don't have a choice of which measurement of Alice's to match with which measurement of Bob's, if that's what you're asking. The spin-entanglement arises because the two photons were emitted simultaneously by a single atom...they don't necessarily have to be measured at the same time, though (Alice and Bob could be at different distances from the atom that emitted the photons, so the photons would take different times to reach them).
 
  • #161
heusdens said:
This assumes a continuous stream of photons?

What if we change the setup so that there is one photon per time unit?

I assume that either the outcomes will then be totally different, or the sync must match.

As you come to learn how the experiment is calibrated, it becomes easier to visualize. There are a number of sub-steps in the actual process that are usually skipped over for the sake of brevity and comprehension.

It is easy to know when the apparatus is properly calibrated: you get perfect correlations at identical angle settings. Keep that in mind. That way there is no possibility of a mistake.

1. A PDC outputs perhaps 2,000* entangled pairs per second. They occur at semi-random intervals. Each detected photon is recorded with a precise timestamp, along with whether it is a + or a -.

2. A window of 1/2000 of a second would be too big; there might be 2 pairs in it (and that would be no good). So you reduce the time window to a much smaller amount: 100 ns or less. There are 10,000,000 such windows in one second. This allows you to be sure you are matching up the right pair. If you are not ending up with near 100% matches, you have more calibration to do.

3. Once you have a time calibrated system, you know how to match up the pairs - regardless of the polarizer settings at either end. They must be calibrated as well so that 0 degrees on one is comparable to 0 degrees on the other. The same principle is used for such calibration.

4. So now you have a time and angle calibrated system, and you are ready to perform your Bell test.

*There are *many* more photons coming from the laser pump (the input to the PDC). These are easily diverted because a PDC crystal (generously) sends the desired entangled pairs out along a slightly different path than the non-entangled photons. It is as if all the desired photons come out one door, and the unused remainder (the ones you don't want to see anyway) go out another. You know you have cut through all the technical issues in the end, because you always start with the perfect correlations.
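
A rough back-of-envelope check of why such a narrow window works, using the figures assumed in the post above:

```python
# With ~2,000 pairs/s the mean spacing between pairs is ~500 microseconds, so a
# 100 ns window almost never contains two different pairs. The usual estimate
# for accidental coincidences is R_acc = R_alice * R_bob * window.
rate = 2000              # detected events per second on each side (assumed figure)
window = 100e-9          # coincidence window in seconds

mean_spacing = 1 / rate                      # 5e-4 s between pairs, on average
accidentals_per_s = rate * rate * window     # ~0.4 false coincidences per second

print(mean_spacing, accidentals_per_s)       # 0.0005 0.4
```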
 
Last edited:
  • #162
wm said:
HUP says you can never get such confirmation on a single pristine particle -- it being no longer pristine after the first (even single quanta) interaction.

That is correct. One of the observations is always first. At that point the entangled wave function collapses, and they are two independent photons. Once you work through it, you will see that the observed results are identical regardless of which is actually measured first. And you NEVER learn anything more about one photon than the Heisenberg Uncertainty Principle (HUP) allows. This can be easily demonstrated.

As ZapperZ has said many times, there is nothing to prevent you from measuring non-commuting particle attributes to any level of precision you desire (apparently defeating the HUP). The problem is that you still don't violate the HUP: because you are NOT learning anything more about that particle. Each measurement changes the state of the particle. If you like, I can explain this point further.
 
  • #163
heusdens said:
And another remark. Suppose we emit, say, 100 discrete photons.
Does each get measured at both sides?

If you get 100 entangled pairs, you will have a very high rate of matches (seen at both detectors within the time window). The very small number that are not matched fit within the inefficiency of the overall apparatus.

I *strongly* suggest you do not attempt to challenge the actual experimental methodology until you *fully* understand it. It has some complexity to it, and you will need to read at least half a dozen experiments in detail to get a fair idea of what is actually going on. Suffice it to say that the theoretical and practical optics have been carefully reviewed by hundreds of the world's leading scientists. (This is not really the right thread to be working through such issues anyway. And if you want to critique the experiments, you should do your homework thoroughly first.)
 
  • #164
DrChinese said:
That is correct. One of the observations is always first. At that point the entangled wave function collapses, and they are two independent photons. Once you work through it, you will see that the observed results are identical regardless of which is actually measured first. And you NEVER learn anything more about one photon than the Heisenberg Uncertainty Principle (HUP) allows. This can be easily demonstrated.

As ZapperZ has said many times, there is nothing to prevent you from measuring non-commuting particle attributes to any level of precision you desire (apparently defeating the HUP). The problem is that you still don't violate the HUP: because you are NOT learning anything more about that particle. Each measurement changes the state of the particle. If you like, I can explain this point further.

Further explanation would be appreciated; for I'm still in disagreement. NOT violating HUP (that's for sure), but surely learning more about the other twinned particle? wm
 
  • #165
wm said:
Further explanation would be appreciated; for I'm still in disagreement. NOT violating HUP (that's for sure), but surely learning more about the other twinned particle? wm

Sure, I can explain.

We have Alice and Bob. We measure Alice at 0 degrees and she is a +. We deduce (because Alice and Bob are twins) that Bob is also a plus at 0 degrees. We measure Bob at 120 degrees and see that Bob is a -. So now we know Bob is a + at 0 and a - at 120.

NOPE. That is demonstrably wrong. If it were true, then we could measure Bob at 0 degrees now and expect to get a + every time. But that is not what happens! We get many pluses, but enough minuses to convince us that we don't know anything about Bob at 0 degrees at all.

On the other hand, we can re-measure Bob at 120 degrees all day long. Each re-test will give exactly the same results, a - at 120 degrees! So obviously the physical act of having the photon move through a polarizer is not itself changing the photon - because the photon does not appear to be changing.

So we did not, in the end, learn anything more than is allowed about a single particle.

The HUP is the limiting factor. Most of the time, the influence of the HUP is ignored in discussion of Bell... but it shouldn't be. This is the exact point that EPR attempted to exploit originally.

So what actually happened in our example above? Below is the correct explanation.

We have entangled Alice and Bob. We measure Alice at 0 degrees and she is a +. We deduce (because Alice and Bob are twins) that Bob is also a + at 0 degrees. We measure Bob at 120 degrees and see that Bob is a -. This will occur 75% of the time when we have Bob as a + at 0 degrees. Malus' Law - cos^2 theta - governs this. It does not matter that we did not measure Bob at 0 degrees; Bob acts as if we did that measurement first anyway... because we measured Alice (Bob's entangled partner) at 0 degrees. Once that measurement is performed on Alice, Bob acts accordingly; but the pair are no longer entangled. You may not like the explanation, but that is in fact the mechanics of how to look at it - and predict the results.
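
A small numerical sketch of the sequence described above, assuming only the textbook Malus/projection rule for successive polarization measurements. It illustrates the argument, not any particular experiment; the function and variable names are my own.

```python
import random, math

def measure(pol_deg, analyser_deg):
    """One polarization measurement: returns (+1 or -1, new polarization angle)."""
    p_plus = math.cos(math.radians(pol_deg - analyser_deg)) ** 2   # Malus' law
    if random.random() < p_plus:
        return +1, analyser_deg            # photon is now polarized along the analyser
    return -1, analyser_deg + 90           # or orthogonal to it

n, plus_at_0 = 100_000, 0
for _ in range(n):
    bob = 0                                # Alice read + at 0, so Bob behaves as + at 0 degrees
    r120, bob = measure(bob, 120)          # about 25% +, 75% - at 120 degrees
    r0, bob = measure(bob, 0)              # re-test Bob at 0 degrees afterwards
    plus_at_0 += (r0 == +1)

print(plus_at_0 / n)   # ~0.625, not 1.0: the 120-degree test disturbed Bob's 0-degree behaviour
```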
 
  • #166
JesseM said:
I think the rules would allow you to post what you think is a classical example that violates a Bell inequality, provided you present it in a tentative way where you're asking for feedback on the example and willing to be shown that it doesn't really violate Bell's theorem, rather than presenting it as a definitive disproof of mainstream ideas (note that heusdens' 'three datastreams' idea was not edited or deleted by the moderators, for example).

I'd encourage you to write up your example and then post it here so we all can see it (presenting it in the tentative way I suggested), and then if it's deleted by the moderators you could always resend it to me via PM.

Jesse, thanks for this. I'd welcome critical, correctional and educational comments on the following first-DRAFT.

In response to recent posts: Is this a classical refutation of Bell's theorem?

1. Let's modify a typical Aspect/Bell test (using photons) in the following way, retaining no significant connection between Alice's detector (oriented a, orthogonal to the line-of-flight axis) and Bob's (oriented b, orthogonal to the line-of-flight axis). (a and b are unit vectors, freely chosen.)

2. We place the Aspect-style singlet-source in a box. The LH and RH sides of the box (facing Alice and Bob respectively) each contain a dichotomic-polariser, the principal axis of which is aligned with the principal axis (say) of the box. (We thus have a classical source of correlated photon-pairs in identical but unknown states of linear polarisation.)

3. Unbeknown to Alice, her dichotomic polariser-analyser (detector) is yoked to the box such that her freely-chosen detector-setting (principal axis at unit-vector a) becomes also the setting of the principal axis of the box.

4. Typically (and beyond their control), Alice's results vary +1 xor -1; Bob's results likewise. Our experimental setting thus satisfies the boundary conditions for Bellian-inequalities based on this +1 xor -1 relation (cf Peres, Quantum Theory 1995: 162).

5. From classical analysis, and in a fairly obvious notation, the following equations hold:

(1) P(AB = S|ab) = cos^2(a, b),
(2) P(AB = D|ab) = sin^2(a, b);

where P = Probability. S = Same result (+1, +1) or (-1, -1) for Alice and Bob; D = Different result (+1, -1) or (-1, +1). Thus P(BC = S|bc) would denote the probability of Alice and Bob getting the same (S) individual results (+1, +1) xor (-1, -1) under the respective test conditions b (Alice) and c (Bob); that is, the first given result is Alice's (here B); the second Bob's (here C); etc.

6. Now: The boundary conditions that we have satisfied yield (via a typical Bellian analysis, deriving a typical Bellian-inequality -- cf https://www.physicsforums.com/showpost.php?p=1215927&postcount=151 ):

(3) P(BC = S|bc) - P(AC = D|ac) - P(AB = S|ab) ≤ 0.

7. However: For the differential-direction set {(a, b) = 67.5°, (a, c) = 45°, (b, c) = 22.5°} we have from (1) and (2):

(4) LHS (3) = 0.85 - 0.5 - 0.15 = 0.2.

Comparing RHS (3) with (4), we conclude: The Bellian-inequality (3) is (in general) FALSE!

Any and all comments will be appreciated, wm
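
A quick numerical check of step 7, using only the rules (1) and (2) stated above:

```python
import math

def cos2(deg): return math.cos(math.radians(deg)) ** 2   # P(S) per rule (1)
def sin2(deg): return math.sin(math.radians(deg)) ** 2   # P(D) per rule (2)

# LHS of (3): P(BC=S|bc) - P(AC=D|ac) - P(AB=S|ab)
# with (b,c) = 22.5, (a,c) = 45, (a,b) = 67.5 degrees
lhs = cos2(22.5) - sin2(45) - cos2(67.5)
print(round(lhs, 3))   # about 0.207, i.e. greater than 0
```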
 
Last edited:
  • #167
It's obviously not a refutation of Bell's theorem. The question is if it's a classical violation of Bell's theorem. It's not, because one of the hypotheses (parameter independence) is violated: P(B=+|ab) is dependent on a.
 
Last edited:
  • #168
DrChinese said:
Sure, I can explain.

We have Alice and Bob. We measure Alice at 0 degrees and she is a +. We deduce (because Alice and Bob are twins) that Bob is also a plus at 0 degrees. We measure Bob at 120 degrees and see that Bob is a -. So now we know Bob is a + at 0 and a - at 120.

NOPE. That is demonstrably wrong. If it were true, then we could measure Bob at 0 degrees now and expect to get a + every time. But that is not what happens! We get many pluses, but enough minuses to convince us that we don't know anything about Bob at 0 degrees at all.

On the other hand, we can re-measure Bob at 120 degrees all day long. Each re-test will give exactly the same results, a - at 120 degrees! So obviously the physical act of having the photon move through a polarizer is not itself changing the photon - because the photon does not appear to be changing.

So we did not, in the end, learn anything more than is allowed about a single particle.

The HUP is the limiting factor. Most of the time, the influence of the HUP is ignored in discussion of Bell... but it shouldn't be. This is the exact point that EPR attempted to exploit originally.

So what actually happened in our example above? Below is the correct explanation.

We have entangled Alice and Bob. We measure Alice at 0 degrees and she is a +. We deduce (because Alice and Bob are twins) that Bob is also a + at 0 degrees. We measure Bob at 120 degrees and see that Bob is a -. This will occur 75% of the time when we have Bob as a + at 0 degrees. Malus' Law - cos^2 theta - governs this. It does not matter that we did not measure Bob at 0 degrees; Bob acts as if we did that measurement first anyway... because we measured Alice (Bob's entangled partner) at 0 degrees. Once that measurement is performed on Alice, Bob acts accordingly; but the pair are no longer entangled. You may not like the explanation, but that is in fact the mechanics of how to look at it - and predict the results.

DocC; No, no, no; surely not? Why talk about classical objects like Alice and Bob; why not talk about a single twinned-pair of particles that they ''measure''? I truly believe that you are caught up in Bellian-realism:

FOR, using your terms, you say: We measure Alice at 0 degrees and she is a +. We deduce (because Alice and Bob are twins) that Bob is also a plus at 0 degrees.

My realism allows me to deduce no such thing. Alice is a + after a perturbing measurement interaction at 0 degrees. (Think of measuring how high she jumps in the 0 direction when hit on the toe with a sledge-hammer.) So I deduce that Bob will perturb to a + when measured at 0 degrees.

You say: We measure Bob at 120 degrees and see that Bob is a -. So now we know Bob is a + at 0 and a - at 120. But Bob (heretofore pristine) has only been perturbed by the 120 degree measurement; he hasn't experienced the sledge-hammer ''measurement''.

So you are assigning perturbing measurement outcomes to pristine unperturbed objects. My realism does not allow me to do that. What am I missing? For it seems to me that you are hoist on your own use of the HUP and the related perturbing effect of a single quantum; more impactful on a photon than a sledge-hammer on your toe? wm
 
  • #169
Hurkyl said:
It's obviously not a refutation of Bell's theorem. The question is if it's a classical violation of Bell's theorem. It's not, because one of the hypotheses (parameter independence) is violated: P(B=+|ab) is independent on a.

Do you mean ''dependent''?

But P(B=+|ab) = one-half for any a; which is independence (as it should be)?

That is: P(B=+|ab) = P(B=+|b) = one-half. Yes?

Thanks, wm
 
  • #170
wm said:
Jesse, thanks for this. I'd welcome critical, correctional and educational comments on the following first-DRAFT.

In response to recent posts: Is this a classical refutation of Bell's theorem?

1. Let's modify a typical Aspect/Bell test (using photons) in the following way, retaining no significant connection between Alice's detector (oriented a, orthogonal to the line-of-flight axis) and Bob's (oriented b, orthogonal to the line-of-flight axis). (a and b are unit vectors, freely chosen.)

2. We place the Aspect-style singlet-source in a box. The LH and RH sides of the box (facing Alice and Bob respectively) each contain a dichotomic-polariser, the principal axis of which is aligned with the principal axis (say) of the box. (We thus have a classical source of correlated photon-pairs in identical but unknown states of linear polarisation.)
I'm afraid I don't know enough about optics to follow this--what is a "dichotomic-polariser", for example? Also, if you say this is a purely classical situation which doesn't depend on specifically quantum properties of entangled photons, would it be possible to restate it in terms of Alice and Bob having *simulated* detectors on computers, and when they type in what measurement they want to make, the computer chooses what results to display based on a classical signal from the source (a string of 1's and 0's encoding information) which tells the computer the relevant properties of each simulated photon?
wm said:
3. Unbeknown to Alice, her dichotomic polariser-analyser (detector) is yoked to the box such that her freely-chosen detector-setting (principal axis at unit-vector a) becomes also the setting of the principal axis of the box.
But if the photons leave the box before Alice makes her choice of detector setting, how is that possible without the effects of Alice's choice traveling backwards in time? And since, as I said, I'm not very familiar with optics, can you explain the significance of the "principal axis of the box"? Are you just talking about the box emitting ordinary classical polarized light in such a way that if a polarization filter is oriented parallel to the axis of the box 100% of the light gets through, and if it's oriented at 90 degrees relative to the axis of the box 0% gets through, as in the demo here? If so, then in terms of my simulation question above, could you just have the source sending signals which tell Alice's computer the angle that each simulated photon is polarized, and when Alice inputs the angle of her simulated polarization filter the computer calculates the probability the simulated photon gets through?
 
Last edited:
  • #171
JesseM said:
I'm afraid I don't know enough about optics to follow this--what is a "dichotomic-polariser", for example?

Given a set of random photons incident on a dichotomic polariser: half will pass, polarised in line with the principal axis; and half will pass, polarised orthogonal to the principal axis.

Also, if you say this is a purely classical situation which doesn't depend on specifically quantum properties of entangled photons, would it be possible to restate it in terms of Alice and Bob having *simulated* detectors on computers, and when they type in what measurement they want to make, the computer chooses what results to display based on a classical signal from the source (a string of 1's and 0's encoding information) which tells the computer the relevant properties of each simulated photon?

Recall that Alice sets the orientation of the source. So you'd need that computer setting fed to the source. Then a twinned pair of identical strings of randomised +1s and -1s, each digit linked with the source setting, one string to each computer.

But if the photons leave the box before Alice makes her choice of detector setting, how is that possible without the effects of Alice's choice traveling backwards in time?

Good point; I should add something to the effect that Alice's re-orientation time is short in relation to the detector dwell time. So the mismatch -- the number of ''prior-orientation'' photons in transit -- has little effect on the overall probability distribution.

And since, as I said, I'm not very familiar with optics, can you explain the significance of the "principal axis of the box"?

The principal axis of the box is a reference-axis linking the orientation of the polarisers in the box to Alice's detector orientation. (For Alice unknowingly orients the box's princ. axis.) INCIDENTALLY: That unknowing bit is to ensure that Alice and Bob think that they're working with a genuine unmodified singlet source; for such is the correlation.

Are you just talking about the box emitting ordinary classical polarized light in such a way that if a polarization filter is oriented parallel to the axis of the box 100% of the light gets through, and if it's oriented at 90 degrees relative to the axis of the box 0% gets through, as in the demo here?

Sort of. The box emits photons in pairs, each pair polarised in the direction of the principal axis xor orthogonal thereto. The detectors have dichotomic polarisers; the setting of these axes allows the normal Malus' Law distribution. BUT 100% of photons pass due to the dichotomicity!

If so, then in terms of my simulation question above, could you just have the source sending signals which tell Alice's computer the angle that each simulated photon is polarized, and when Alice inputs the angle of her simulated polarization filter the computer calculates the probability the simulated photon gets through?

The design is such that Alice detects 100% of the principal-axis photons as polarised on her princ. axis (+1); and 100% of the orthogonal photons as polarised on her orthogonal axis (-1). The overall distribution is random; approx. 50% of each.

Hope this helps, wm
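
For what it's worth, here is a minimal sketch of the computer-simulated version JesseM asked about, under one reading of the setup just described: Alice's freely chosen angle also sets the box axis, each pair leaves polarised along that axis or orthogonal to it with 50/50 odds, and each dichotomic analyser answers +1/-1 with the Malus-law probability. All names and numbers are illustrative assumptions.

```python
import random, math

def malus_outcome(photon_deg, analyser_deg):
    p_plus = math.cos(math.radians(photon_deg - analyser_deg)) ** 2
    return +1 if random.random() < p_plus else -1

def fraction_same(a_deg, b_deg, n=100_000):
    same = 0
    for _ in range(n):
        photon = a_deg if random.random() < 0.5 else a_deg + 90   # box axis follows Alice's setting
        alice = malus_outcome(photon, a_deg)   # deterministic here: +1 or -1 depending on the pair
        bob   = malus_outcome(photon, b_deg)
        same += (alice == bob)
    return same / n

print(fraction_same(0, 22.5))   # ~cos^2(22.5 deg), about 0.85, reproducing rule (1) above
```

Note that in this sketch the distribution of photon polarizations depends on Alice's setting a, which is exactly the kind of setting-dependence that Bell-type derivations assume away; that appears to be the point at issue in the replies elsewhere in the thread.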
 
  • #172
JesseM said:
Each photon has only a single entangled "twin" that it can be matched with; you don't have a choice of which measurement of Alice's to match with which measurement of Bob's, if that's what you're asking. The spin-entanglement arises because the two photons were emitted simultaneously by a single atom...they don't necessarily have to be measured at the same time, though (Alice and Bob could be at different distances from the atom that emitted the photons, so the photons would take different times to reach them).

That is the answer I was looking for. So it assumes perfect synchronization for the experiment to work.
 
  • #173
heusdens said:
That is the answer I was looking for. So it assumes perfect synchronization for the experiment to work.

You have to know which measurement goes with which one, because the measurements have to be matched up as entangled pairs. However, there are many ways to know this. With 100% efficient detectors, you simply have to COUNT the events (the 33rd event at Alice will go with the 33rd event at Bob).
Or you can send off "tags" which contain the pair number to both Alice and Bob. There doesn't need to be any time relationship, however.
Alice can keep her photons in different boxes for years, and THEN do the measurement - this wouldn't alter any result (if, of course, Alice had a box in which to keep photons for such a long time...).
This kind of experiment has been done (on smaller scales) by sending one of the photons through a long optical fibre, which is rolled up in a box. Of course the delay was not a year!
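
A tiny sketch of the "just count the events" pairing described above, assuming (as stated) idealised 100% detector efficiency so the n-th record on each side belongs to the same pair; the example data reuse heusdens' tables:

```python
alice_records = [("A", "+"), ("B", "+"), ("C", "-")]   # (setting, outcome), in arrival order
bob_records   = [("C", "+"), ("B", "+"), ("A", "-")]

# Pair the n-th event at Alice with the n-th event at Bob.
for seqno, (a, b) in enumerate(zip(alice_records, bob_records), start=1):
    print(seqno, a, b)
```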
 
  • #174
wm said:
Jesse, thanks for this. I'd welcome critical, correctional and educational comments on the following first-DRAFT.

In response to recent posts: Is this a classical refutation of Bell's theorem?

...

6. Now: The boundary conditions that we have satisfied yield (via a typical Bellian analysis, deriving a typical Bellian-inequality -- cf https://www.physicsforums.com/showpost.php?p=1215927&postcount=151 ):

(3) P(BC = S|bc) - P(AC = D|ac) - P(AB = S|ab) ≤ 0.

7. However: For the differential-direction set {(a, b) = 67.5°, (a, c) = 45°, (b, c) = 22.5°} we have from (1) and (2):

(4) LHS (3) = 0.85 - 0.5 - 0.15 = 0.2.

Comparing RHS (3) with (4), we conclude: The Bellian-inequality (3) is (in general) FALSE!

Any and all comments will be appreciated, wm

wm,

The math you have is exactly correct. The conclusion you draw from it is not.

Your (3) is a standard presentation of the Bell Inequality.

Your (4) is a standard presentation of the predictions of QM, as relates to (3).

This shows that QM is incompatible with the Bell Inequality. It does NOT show that Bell's Inequality is violated for classical situations.
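
One way to see this point is a brute-force check, assuming the local-deterministic picture discussed earlier in the thread (each pair carries predetermined same-result answers for settings a, b, c). For every such assignment, the quantity in wm's expression (3) is never positive, so it stays at or below zero for any statistical mixture of such pairs; a violation therefore requires giving up some premise of that picture, not just classical arithmetic.

```python
from itertools import product

# Each predetermined state assigns an answer (+1 or -1) to each of the three
# settings a, b, c, with both particles giving the same answers (the "S" case
# discussed earlier). For every assignment, P(BC=S) - P(AC=D) - P(AB=S)
# evaluates pointwise to 0 or -2, never above 0.
for va, vb, vc in product([+1, -1], repeat=3):
    value = int(vb == vc) - int(va != vc) - int(va == vb)
    print(va, vb, vc, value)
```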
 
  • #175
wm said:
DocC; No, no, no; surely not? Why talk about classical objects like Alice and Bob; why not talk about a single twinned-pair of particles that they ''measure''? I truly believe that you are caught up in Bellian-realism:

FOR, using your terms, you say: We measure Alice at 0 degrees and she is a +. We deduce (because Alice and Bob are twins) that Bob is also a plus at 0 degrees.

My realism allows me to deduce no such thing. Alice is a + after a perturbing measurement interaction at 0 degrees. (Think of measuring how high she jumps in the 0 direction when hit on the toe with a sledge-hammer.) So I deduce that Bob will perturb to a + when measured at 0 degrees.

You say: We measure Bob at 120 degrees and see that Bob is a -. So now we know Bob is a + at 0 and a - at 120. But Bob (heretofore pristine) has only been perturbed by the 120 degree measurement; he hasn't experienced the sledge-hammer ''measurement''.

So you are assigning perturbing measurement outcomes to pristine unperturbed objects. My realism does not allow me to do that. What am I missing? For it seems to me that you are hoist on your own use of the HUP and the related perturbing effect of a single quantum; more impactful on a photon than a sledge-hammer on your toe? wm

Perhaps I was not clear. Alice and Bob are intended to be synonymous with the entangled photons and their measurement. So I am providing a description of quantum objects. Yes, I totally agree that these are only "pristine" (to use your term) once. So no disagreement there.

But you say that Bob is not affected when the "sledgehammer" is applied to Alice. Oh, but that is NOT true at all! Bob absolutely acts as if he was given the same sledgehammer as Alice! That is the non-local collapse of the wave function, and it is definitely and demonstrably instantaneous. If this did not happen, we wouldn't have anything interesting to discuss in this thread. :smile:
 
  • #176
wm said:
3. Unbeknown to Alice, her dichotomic polariser-analyser (detector) is yoked to the box such that her freely-chosen detector-setting (principal axis at unit-vector a) becomes also the setting of the principal axis of the box.

I do not understand the point of this twist. This is simply the same effect as if her setting remained at 0 degrees all of the time, while Bob varies his setting. So what if Alice's setting is chained to the reference of the source, and she doesn't know it?

Can you explain how this changes the setup or the results? Otherwise, I think it confuses the issues for most readers.
 
  • #177
wm said:
Do you mean ''dependent''?

But P(B=+|ab) = one-half for any a; which is independence (as it should be)?

That is: P(B=+|ab) = P(B=+|b) = one-half. Yes?

Thanks, wm
Yes, dependent. Hrm, did I misread the problem? Oh well, I'll think about it later.
 
  • #178
DrChinese said:
wm,

The math you have is exactly correct. The conclusion you draw from it is not.

Your (3) is a standard presentation of the Bell Inequality.

Your (4) is a standard presentation of the predictions of QM, as relates to (3).

This shows that QM is incompatible with the Bell Inequality. It does NOT show that Bell's Inequality is violated for classical situations.

Doc, thanks for this; but I'm not clear on your NOT. I tried to use no QM at all in deriving (4); that is, I tried to leave QM out of the whole affair by reducing the experiment to be only classical mechanics (CM) at the level (say) of high-school physics.

But the fact that basic CM and QM agree does not mean that I have used QM. (NB: I agree that some ''CM'' situations will be compatible with BI.) So:

1. Why do you imply (as I read it) that I have used QM?

2. And doesn't my example show a classical situation that provides the basis for refuting many Bellian inequalities in classical settings?

3. My personal belief is that BT is no part of QM. I think Peres at least agrees. Is it accepted and valid to say ''BT is no part of QM''?

Thanks, wm
 
  • #179
DrChinese said:
I do not understand the point of this twist. This is simply the same effect as if her setting remained at 0 degrees all of the time, while Bob varies his setting. So what if Alice's setting is chained to the reference of the source, and she doesn't know it?

Can you explain how this changes the setup or the results? Otherwise, I think it confuses the issues for most readers.

Doc, There is no change in RESULTS, the outcome being exactly (mathematically) equivalent; the simplest classical refutation (PS: in my old terms, pending a reply to my last post) of BI that I yet know of. BUT:

I thought it best not to have Alice (it could alternatively be Bob) reduced to a cipher/zombie. AND I thought of building a simple class-room demonstration (electric light bulb, swapping mono-polarisers, measure intensities, etc). So I then wanted a student at each end to be able to independently change each detector (with the mechanism hidden); to essentially provide a realistic simulation of Aspect's experiment with the exact same correlations.

It also makes the CM maths require just a bit more thought.

SO: Considering this, and in that I have already been given a few suggestions for improvement, do you think I should go with the simplified version here? OR: Should we encourage readers to think the given model through? (Since they can readily be helped and it's not that difficult.)

Thanks, wm
 
  • #180
DrChinese said:
Perhaps I was not clear. Alice and Bob are intended to be synonymous with the entangled photons and their measurement. So I am providing a description of quantum objects. Yes, I totally agree that these are only "pristine" (to use your term) once. So no disagreement there.

But you say that Bob is not affected when the "sledgehammer" is applied to Alice. Oh, but that is NOT true at all! Bob absolutely acts as if he was given the same sledgehammer as Alice! That is the non-local collapse of the wave function, and it is definitely and demonstrably instantaneous. If this did not happen, we wouldn't have anything interesting to discuss in this thread. :smile:

Doc, I really appreciate your hanging in there on this! Which brings me to two issues:

1. I think that I have a better explanation of this whole sorry business. BUT it's essentially ''personal research''. YET it would surely benefit from public discussion here. SO do you think it possible that a special discussion could be opened up, something between the existing PF rules for the open discussion we're having now and the ''personal research'' section?

That is: It would be open public discussion but the Admins would have agreed that it was not totally crackpot stuff? It might carry a warning re ''speculative''? OR: Is there somewhere else on the web better suited for discussion of such stuff?

2. Now, returning to the topic: You say above (slight edit)
wm says that Bob is not affected when the "sledgehammer" is applied to Alice. Oh, but that is NOT true at all! Bob absolutely acts as if he was given the same sledgehammer as Alice! That is the non-local collapse of the wave function, and it is definitely and demonstrably instantaneous.

Question: You appear to be conjoining a quite-magical long-distance physical effect with that ''non-local collapse of the wave-function'' with [sic] something ''definitely and demonstrably instantaneous''. Could you elaborate, please?

Like: If ''demonstrably'' has its old meaning, and if ''seeing is believing'', it seems that a clear and definitive case can be made for your position? Yet my impression is that each of the three conjoined phrases is problematic?

(PS: I'll likely open a new thread on a related subject: LOCALITY or Local QM.)

Thanks again, wm
 
