Entanglement: "spooky action at a distance"

  • Thread starter: Dragonfall
  • Tags: Entanglement

Summary
Entanglement, often referred to as "spooky action at a distance," is explained through Bell's Theorem, which shows that the outcomes of measurements on entangled particles are correlated in a way that defies the notion of independent randomness. The correlation follows a specific formula supported by experimental evidence, rejecting simpler models that suggest random outcomes. The design of EPR-Bell tests involves synchronized detection events that create interdependencies between measurements, which do not imply faster-than-light (FTL) communication. Discussions also highlight that the correlations arise from shared properties at the quantum level rather than any FTL influence or random pairing. Overall, the consensus is that entanglement does not facilitate instantaneous information transfer, as no physical evidence supports FTL transmissions.
Dragonfall
Entanglement "spooky action at a distance"

Why can't we think of entanglement as simply committing (without knowledge) to a random outcome, instead of "spooky action at a distance"?
 


Dragonfall said:
Why can't we think of entanglement as simply committing (without knowledge) to a random outcome, instead of "spooky action at a distance"?


"Spooky Action at a Distance" (nonlocality) is not the only alternative consistent with the facts. But it is probably the more popular one.

The answer to your question is that Bell's Theorem demonstrates a mathematical relationship among the outcomes of measurements on entangled particles that is inconsistent with the idea that they are independent and random. Of course, the actual outcomes themselves are random when looked at separately. But when the outcome streams are compared, the pattern becomes clear.

Specifically: the correlation of the outcomes follows the formula C = cos^2(theta), where theta is the relative angle between the measurement apparatuses. On the other hand, the formula associated with your hypothesis is C = 0.25 + cos^2(theta)/2. Experiments support the first formula - the one derived from Quantum Mechanics - and unambiguously reject the second.
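The two formulas above can be checked with a quick simulation. The sketch below is a toy model, not anyone's actual experiment: each photon pair carries one shared random polarization, and each photon independently passes its polarizer with the Malus-law probability. This naive local picture reproduces the 0.25 + cos^2(theta)/2 curve, not the QM cos^2(theta) curve.

```python
import math
import random

def local_model_match_rate(theta, n=200_000, seed=1):
    """Monte Carlo of a naive local model: each pair carries one shared
    random polarization lambda; each photon independently passes its
    polarizer with Malus-law probability cos^2(setting - lambda).
    A 'match' means both pass or both are blocked."""
    rng = random.Random(seed)
    matches = 0
    for _ in range(n):
        lam = rng.uniform(0.0, math.pi)
        pass_a = rng.random() < math.cos(0.0 - lam) ** 2
        pass_b = rng.random() < math.cos(theta - lam) ** 2
        matches += (pass_a == pass_b)
    return matches / n

for deg in (0, 30, 60, 90):
    t = math.radians(deg)
    qm = math.cos(t) ** 2                  # C = cos^2(theta)
    naive = 0.25 + math.cos(t) ** 2 / 2    # C = 0.25 + cos^2(theta)/2
    print(f"{deg:3d} deg  QM {qm:.3f}  formula {naive:.3f}  "
          f"simulated {local_model_match_rate(t):.3f}")
```

At theta = 0 the local model tops out at a 0.75 match rate while QM predicts 1.0; this is exactly the divergence the experiments resolve in QM's favor.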
 


Dragonfall said:
Why can't we think of entanglement as simply committing (without knowledge) to a random outcome ...
Because in the global experimental design(s) characteristic of EPR-Bell tests the pairing of individual results (A,B) isn't done randomly. There's a very narrow (nanosecond scale) window within which the coincidence circuitry operates to produce pairs. The effect of such synchronization is that for an individual detection in, say, A's datastream, there should be, at most, one candidate (either a detection or a nondetection attribute) for pairing in B's datastream.

This interdependency between paired detection events at A and B is a function of the experimental designs necessary to produce EPR-Bell type entanglements and has, as far as I can tell, nothing to do with instantaneous or FTL transmissions.

If FTL transmissions really aren't involved, then any symbolic locality condition becomes simply a statistical independence condition, and this is just a byproduct of the experimental design.

For this reason, and also simply because there's no physical evidence for FTL transmissions, the best assumption is that FTL transmissions aren't involved in the production of quantum entanglement.
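The nanosecond-window pairing described above can be sketched in code. This is only a toy illustration: the timestamps and the 1 ns window below are invented for the example, not taken from any actual experiment.

```python
def pair_coincidences(times_a, times_b, window=1e-9):
    """Pair each detection at A with at most one detection at B that
    falls within the coincidence window; unmatched events are dropped."""
    pairs = []
    j = 0
    for ta in times_a:
        # skip B events that are too early to match this A event
        while j < len(times_b) and times_b[j] < ta - window:
            j += 1
        if j < len(times_b) and abs(times_b[j] - ta) <= window:
            pairs.append((ta, times_b[j]))
            j += 1  # each B event is used at most once
    return pairs

# illustrative timestamps (seconds); only the first two A events
# have a B partner inside the 1 ns window
a = [1.0e-6, 2.0e-6, 3.0e-6]
b = [1.0e-6 + 4e-10, 2.0e-6 - 3e-10, 3.5e-6]
print(pair_coincidences(a, b))  # two pairs; the 3.0 microsecond event is unpaired
```

The point of the sketch is the "at most one candidate" property: the window is so narrow that each detection on one side has either exactly one partner on the other side or none at all.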
Dragonfall said:
... instead of "spooky action at a distance"?
As Dr. Chinese has pointed out, one doesn't have to attribute the observed correlation between the angular difference of the spatially separated polarizer settings and the rate of coincidental detection to "spooky action at a distance" -- or even to FTL transmissions.

For example, if, for any given coincidence interval, it's assumed that the polarizers at A and B interacted with the same incident disturbance, then it isn't difficult to understand the cos^2 angular dependence.
 


Hi! I'm new here and interested in physics, as all of you are :) I'm speaking from the point of view of an amateur (hoping to change this in the future :)), so I hope you won't laugh too much at my contributions :D

Quantum entanglement is a process binding two or more particles together through space and time (if I've understood the definition correctly), and every change in the quantum state of the first particle leads simultaneously to the same change in the paired one, regardless of space and time.

Now I was wondering: does this mean transmitting information (by quantum states) at superluminal velocity? Can this one day be used for transmitting information more efficiently (actually instantaneously) over larger distances? And what does a quantum state represent; which features does it have?

best regards, Marin
 


Marin said:
Hi! I'm new here and interested in physics, as all of you are :) I'm speaking from the point of view of an amateur (hoping to change this in the future :)), so I hope you won't laugh too much at my contributions :D

Quantum entanglement is a process binding two or more particles together through space and time (if I've understood the definition correctly), and every change in the quantum state of the first particle leads simultaneously to the same change in the paired one, regardless of space and time.

Now I was wondering: does this mean transmitting information (by quantum states) at superluminal velocity? Can this one day be used for transmitting information more efficiently (actually instantaneously) over larger distances? And what does a quantum state represent; which features does it have?

best regards, Marin

No, it does not mean FTL communication or signals. One can only say that there is an FTL "influence".
 


ThomasT said:
Because in the global experimental design(s) characteristic of EPR-Bell tests the pairing of individual results (A,B) isn't done randomly. There's a very narrow (nanosecond scale) window within which the coincidence circuitry operates to produce pairs. The effect of such synchronization is that for an individual detection in, say, A's datastream, there should be, at most, one candidate (either a detection or a nondetection attribute) for pairing in B's datastream.

This interdependency between paired detection events at A and B is a function of the experimental designs necessary to produce EPR-Bell type entanglements and has, as far as I can tell, nothing to do with instantaneous or FTL transmissions.

If FTL transmissions really aren't involved, then any symbolic locality condition becomes simply a statistical independence condition, and this is just a byproduct of the experimental design.

For this reason, and also simply because there's no physical evidence for FTL transmissions, the best assumption is that FTL transmissions aren't involved in the production of quantum entanglement.
I'm not sure I follow your argument perfectly, but it sounds as though you're saying that the coincidence circuitry may be the source of the correlations, and not anything occurring with or between the entangled particles. If that were true, then any two particles would produce the correlations, not just entangled particles.

Perhaps I'm misunderstanding your argument.
 


peter0302 said:
I'm not sure I follow your argument perfectly, but it sounds as though you're saying that the coincidence circuitry may be the source of the correlations, and not anything occurring with or between the entangled particles. If that were true, then any two particles would produce the correlations, not just entangled particles.

Perhaps I'm misunderstanding your argument.

I was in a hurry, as I am now. :smile: Sorry for any misunderstanding.

My understanding is that it's assumed, at least tacitly, that in EPR-Bell experiments the source of the entanglement at the quantum level is, eg., emission from the same atom of two opposite-moving disturbances, or transmission of an identical torque to two spatially separated particles or groups of particles, or direct interaction of two particles, or however else one might produce spatially separated, yet identical, properties to correlate with respect to one global operation or another.

The data correlations themselves are indeed produced by the experimenters via the experimental design. But yes, it's presumed that the deep cause of the correlations is whatever is happening at the quantum level.
 


ThomasT said:
I was in a hurry, as I am now. :smile: Sorry for any misunderstanding.

My understanding is that it's assumed, at least tacitly, that in EPR-Bell experiments the source of the entanglement at the quantum level is, eg., emission from the same atom of two opposite-moving disturbances, or transmission of an identical torque to two spatially separated particles or groups of particles, or direct interaction of two particles, or however else one might produce spatially separated, yet identical, properties to correlate with respect to one global operation or another.

The data correlations themselves are indeed produced by the experimenters via the experimental design. But yes, it's presumed that the deep cause of the correlations is whatever is happening at the quantum level.

It sounds like you are saying that EPR correlations are obtained by a common property of the two emitted particles ("torque", or "disturbance" or whatever). In other words, that in EPR correlations, one simply finds back the correlation of the common properties the source of the two particles has induced in them. A bit like the source is a vegetable chopper, and it cuts vegetables in two pieces to send them off to two different locations. It randomly picks vegetables (say, a salad, a tomato, a cucumber), but then it takes, say, a salad, cuts it in two pieces, and sends off half a salad to Alice and to Bob. Alice by herself sees randomly the arrival of half a salad, half a tomato, half a cucumber, ... and Bob too, but of course when we compare their results, each time Alice had half a salad, Bob also had half a salad etc...
Is that what you mean ?
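The chopper analogy is easy to simulate. In this sketch (the vegetable names come from the post; everything else, including the function name and seed, is illustrative), each observer's stream looks random on its own, yet the paired streams agree every time.

```python
import random

def chopper_run(n, seed=0):
    """vanesch's vegetable-chopper analogy: the source picks a random
    vegetable and sends identical halves to Alice and Bob, so any
    correlation comes entirely from the common source."""
    rng = random.Random(seed)
    veggies = ["salad", "tomato", "cucumber"]
    alice, bob = [], []
    for _ in range(n):
        v = rng.choice(veggies)
        alice.append(v)  # Alice's half
        bob.append(v)    # Bob's identical half
    return alice, bob

alice, bob = chopper_run(1000)
# Each stream alone is a random sequence, but the pairs match every time.
print(all(x == y for x, y in zip(alice, bob)))  # -> True
```

This reproduces only the perfect same-basis correlation, which, as the thread goes on to discuss, is precisely the part that is not the surprising content of Bell's theorem.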
 


I don't know if that's what he meant, but that's what I meant.
 
  • #10


ThomasT said:
My understanding is that it's assumed, at least tacitly, that in EPR-Bell experiments the source of the entanglement at the quantum level is, eg., emission from the same atom of two opposite-moving disturbances, or transmission of an identical torque to two spatially separated particles or groups of particles, or direct interaction of two particles, or however else one might produce spatially separated, yet identical, properties to correlate with respect to one global operation or another.
Yes.


ThomasT said:
The data correlations themselves are indeed produced by the experimenters via the experimental design. But yes, it's presumed that the deep cause of the correlations is whatever is happening at the quantum level.
Ok. On one level I agree that the correlations are not evident until the results of measurements are compared using the coincidence circuitry. But the coincidence circuitry merely compares two measurements that have already been made - it does not fabricate the results. If we extrapolate back in time to attempt to discern what happened, we cannot account for the fact that the two measurement events are outside one another's light cones. How then did the correlation occur?

The contenders have always been:
- Superdeterminism: the entire system, including the experimental components, was pre-ordained to act the way it did, and all conspired to produce the results we see.

- Hidden variables: there was something hidden in the particles that we couldn't detect that determined the outcome. Bell disproved naive hidden variable theories but more sophisticated ones such as Bohm still are popular among some.

- Many Worlds: Photon A splits in two at the polarizer, and Photon B splits in two at the other polarizer; when both reach the coincidence counter, a total of four worlds are created, and the odds of being in any one of those four are governed by Malus' law, depending on the difference in angles between the polarizers.

- Copenhagen: the two photons going through the polarizers isn't actually a measurement, for the experimenter, because it hasn't been observed yet by him. So the wave function hasn't collapsed, and the system continues to evolve in the superpositioned state until both measurements have been observed by the same observer. (Unfortunately this doesn't account for the fact that two experimenters could independently view the results of their respective photons, meet, and then compare notes - each believes that he caused the other's wavefunction to collapse. Who's right?) The fact that wavefunction collapse has no objective and logically self-consistent definition is CI's greatest failing IMO.

If I understand you right, you're arguing for superdeterminism?
 
  • #11


vanesch said:
It sounds like you are saying that EPR correlations are obtained by a common property of the two emitted particles ("torque", or "disturbance" or whatever). In other words, that in EPR correlations, one simply finds back the correlation of the common properties the source of the two particles has induced in them. A bit like the source is a vegetable chopper, and it cuts vegetables in two pieces to send them off to two different locations. It randomly picks vegetables (say, a salad, a tomato, a cucumber), but then it takes, say, a salad, cuts it in two pieces, and sends off half a salad to Alice and to Bob. Alice by herself sees randomly the arrival of half a salad, half a tomato, half a cucumber, ... and Bob too, but of course when we compare their results, each time Alice had half a salad, Bob also had half a salad etc...
Is that what you mean ?
Alice and Bob's Salad instead of Bertlmann's socks, right? :smile:
 
  • #12


peter0302 said:
Alice and Bob's Salad instead of Bertlmann's socks, right? :smile:

Yup :approve: o:)
 
  • #13


vanesch said:
It sounds like you are saying that EPR correlations are obtained by a common property of the two emitted particles ("torque", or "disturbance" or whatever).
Is that what you mean ?
I thought my statement was pretty clear. :smile: Maybe not. I'm in a hurry just now, but will return to reply to your and peter's questions in an hour or so.
 
  • #14


vanesch said:
It sounds like you are saying that EPR correlations are obtained by a common property of the two emitted particles ("torque", or "disturbance" or whatever). In other words, that in EPR correlations, one simply finds back the correlation of the common properties the source of the two particles has induced in them. A bit like the source is a vegetable chopper, and it cuts vegetables in two pieces to send them off to two different locations. It randomly picks vegetables (say, a salad, a tomato, a cucumber), but then it takes, say, a salad, cuts it in two pieces, and sends off half a salad to Alice and to Bob. Alice by herself sees randomly the arrival of half a salad, half a tomato, half a cucumber, ... and Bob too, but of course when we compare their results, each time Alice had half a salad, Bob also had half a salad etc...
Is that what you mean ?
I'd say it's more like a Caesar salad without the anchovies. Just kidding. :rolleyes:

Here's what I said:
My understanding is that it's assumed, at least tacitly, that in EPR-Bell experiments the source of the entanglement at the quantum level is, eg., emission from the same atom of two opposite-moving disturbances, or transmission of an identical torque to two spatially separated particles or groups of particles, or direct interaction of two particles, or however else one might produce spatially separated, yet identical, properties to correlate with respect to one global operation or another.

The above statement pertains to the models and experimental designs (that I've seen) involved in producing quantum entanglements.

This is, and will likely forever remain, an assumption regarding what is actually happening at the quantum level. Nevertheless, this assumption of "common property [or properties] of the two emitted particles [or spatially separated groups of particles]" is an integral part of the designs of the experiments that produce entangled data (eg., correlations of the type gotten via typical optical Bell tests).

This is my conceptual understanding of the nature of quantum entanglement, and it's the way that at least some of the people who do the experiments that produce quantum entanglement think about it. And there's simply no reason in the first place to entertain the idea that quantum entanglement has anything to do with FTL propagation of anything.
 
  • #15


peter0302 said:
Ok. On one level I agree that the correlations are not evident until the results of measurements are compared using the coincidence circuitry. But the coincidence circuitry merely compares two measurements that have already been made - it does not fabricate the results. If we extrapolate back in time to attempt to discern what happened, we cannot account for the fact that the two measurement events are outside one another's light cones. How then did the correlation occur?
Zap two spatially separated groups of atoms in an identical way, and the two groups of atoms are entangled with respect to the common motional properties induced by the common zapping.

Two opposite-moving optical disturbances emitted at the same time by the same atom are entangled with respect to the motion of the atom at the time of emission.

The experimental correlations are produced by analyzing the entangled properties with identical instruments via a global experimental design.

peter0302 said:
The contenders have always been:
- Superdeterminism: the entire system, including the experimental components, was pre-ordained to act the way it did, and all conspired to produce the results we see.

- Hidden variables: there was something hidden in the particles that we couldn't detect that determined the outcome. Bell disproved naive hidden variable theories but more sophisticated ones such as Bohm still are popular among some.

- Many Worlds: Photon A splits in two at the polarizer, and Photon B splits in two at the other polarizer, and when both reach the coincidence counter, a total of four worlds are created, and the odds of being in anyone of those four is governed by Malus' law depending on the difference in angles between the polarizers

- Copenhagen: the two photons going through the polarizers isn't actually a measurement, for the experimenter, because it hasn't been observed yet by him. So the wave function hasn't collapsed, and the system continues to evolve in the superpositioned state until both measurements have been observed by the same observer. (Unfortunately this doesn't account for the fact that two experimenters could independently view the results of their respective photons, meet, and then compare notes - each believes that he caused the other's wavefunction to collapse. Who's right?) The fact that wavefunction collapse has no objective and logically self-consistent definition is CI's greatest failing IMO.

If I understand you right, you're arguing for superdeterminism?

I think the assumption of some sort of determinism underlies all science.

My understanding of wave function collapse via the CI is that once an individual qualitative result is recorded, then all of the terms of the wave function that don't pertain to that result are discarded. In this sense, the wave function describes, quantitatively, the behavior of the experimental instruments.

I don't see what isn't objective or self-consistent about the CI.

The essence of the CI, as I see it, is that statements regarding events and behavior that haven't been observed are speculative. Objective science begins at the instrumental level.
Hence a fundamental quantum of action, and limitations on what we can ever possibly know.
 
  • #16


No, you're trapped in a classical understanding of entanglement.

Bell's theorem proves that there is no actual property that can be common to the entangled particles before their detection that can account for the correlations that we see. It can get close, but not all the way there.

It turns out that the probability of joint detection is dependent solely on the difference in angle between the two polarizers. Moreover, it works even if the polarizer angles are set a nanosecond before detection. There's nothing about the experimental setup that could cause that. There's no conceivable hidden variable scheme that could cause the photons to behave that way. It is as though they "know" what the other polarizer angle was.

ThomasT said:
I think the assumption of some sort of determinism underlies all science.
ACK. No, that's the whole point! It underlies all of *your* *common* *sense*.

ThomasT said:
My understanding of wave function collapse via the CI is that once an individual qualitative result is recorded, then all of the terms of the wave function that don't pertain to that result are discarded. In this sense, the wave function describes, quantitatively, the behavior of the experimental instruments.

I don't see what isn't objective or self-consistent about the CI.
Define a qualitative result being recorded. By whom? By a computer? A person? A cat? It's subjective. There is no consistent definition of observer or observation. It all depends on the experiment. Heisenberg even said this. He said the quantum/classical divide depends on the experiment. It's not an objective process. And it's not well understood (in CI). The only thing that is well understood is how to calculate the odds.
 
  • #17


ThomasT said:
I'd say it's more like a Caesar salad without the anchovies. Just kidding. :rolleyes:

Here's what I said:
My understanding is that it's assumed, at least tacitly, that in EPR-Bell experiments the source of the entanglement at the quantum level is, eg., emission from the same atom of two opposite-moving disturbances, or transmission of an identical torque to two spatially separated particles or groups of particles, or direct interaction of two particles, or however else one might produce spatially separated, yet identical, properties to correlate with respect to one global operation or another.

The above statement pertains to the models and experimental designs (that I've seen) involved in producing quantum entanglements.

I'm trying to find out whether you understood the difficulty presented by Bell's theorem or not. If you think that the correlations found in the outcomes in EPR experiments are due to a property common to the two particles, in other words, because the two particles are, say, identical copies of one another, determined by the fact that they have been emitted by the same source (the same atom or so), and hence have random, but identical, spin each time, or something else, then:
1) that wouldn't have surprised anybody
2) you have not understood Bell's theorem.

The surprise resides in the fact that the correlations found cannot be explained that way: numerically they don't fit. With the half-a-vegetable emitter, you cannot obtain the same correlations as those of an EPR experiment. That's exactly the content of Bell's theorem. Of course you can find the perfect correlations in the case of identical analysis. That's no surprise. That's like "each time bob finds half a salad, alice finds half a salad too". Easy. That's because they came from the same source: the chopper.
The crazy thing about EPR results is something like: AT THE SAME TIME, we also have: "each time bob finds a salad, the color of Alice's vegetable is random".
That's kind of impossible with our chopper: each time bob finds a salad, Alice was supposed to find a salad too, so if she decided not to look at the kind of vegetable, but rather at its color, she should have found systematically "green". Well, no. She finds red or dark green/blue also.

Now, maybe you know this, but then I don't understand your statements, which then sound tautological to me: "particles show entangled behavior because they became entangled at their source". Sure. But that doesn't explain the "paradoxial" correlations AND lack of correlation at the same time.
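The "correlation and lack of correlation at the same time" can be made concrete with the standard QM prediction quoted earlier in the thread, C = cos^2(theta). A minimal calculation (the function name is mine, for illustration):

```python
import math

def match_probability(a_deg, b_deg):
    """QM prediction for polarization-entangled photon pairs: the
    probability that the two polarizers give the same result is
    cos^2 of the relative angle between their settings."""
    return math.cos(math.radians(a_deg - b_deg)) ** 2

print(match_probability(30, 30))  # identical settings -> 1.0, perfect correlation
print(match_probability(0, 45))   # settings 45 deg apart -> ~0.5, no correlation
```

At identical settings the outcomes agree every time (the "half a salad on both sides" part), yet rotate one analyzer 45 degrees and the outcomes become completely uncorrelated, which is the part the chopper picture cannot deliver.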
 
  • #18


Yes exactly. Each time Alice eats a tomato, Bob is more likely to eat a cucumber. Each time Alice can't finish her broccoli, Bob eats his carrots more often.

Those are the types of wacky correlations that entanglement produces. Yes, yes I suppose you could construct very elaborate explanations for all that. Fortunately Bell proved mathematically that NO explanation can work.
 
  • #19


There are some loopholes in Bell's theorem. The most obvious one is the assumption that, even though the theory is assumed to be deterministic, the observer can choose the experimental setup at will. This is impossible, because if the observer had "free will" that would violate determinism.

This is discussed in detail in http://arxiv.org/abs/quant-ph/0701097
 
  • #20


Count Iblis said:
There are some loopholes in Bell's theorem. The most obvious one is the assumption that, even though the theory is assumed to be deterministic, the observer can choose the experimental setup at will. This is impossible, because if the observer had "free will" that would violate determinism.

This is discussed in detail in http://arxiv.org/abs/quant-ph/0701097

Because it is 't Hooft saying it, it gets more visibility than an article like this otherwise would. But there is plenty to criticize, and the idea of "superdeterminism" is not considered a loophole in Bell's Theorem. Keep in mind that the essential question is whether a local deterministic theory can yield predictions equivalent to QM. A superdeterministic theory comes no closer!

Keep in mind that such a theory comes with its own rather substantial baggage. It would be somewhat like saying that Bell's Theorem is flawed because you believe in God, and Bell's Theorem tacitly assumes there is no God. I think we can all acknowledge that if there is some unseen force that changes the results of only the experiments we perform to have different values than they really are - then we will be blissfully unaware of this and have incorrect scientific theories. Except that these "incorrect" theories will still work and be useful "as if" they were correct all along.

(Side comment: I guess the Pythagorean Theorem is wrong similarly.)

The fact is that even if our choice of measurements is pre-determined because we don't have free will, that in no ways explains why the results match QM's predictions and not those of local realistic theories.
 
  • #21


Question about the derivation of Bell's theorem. He assumes, does he not, not only the freedom to choose experimental conditions, but also the freedom to choose any continuous real value for the parameters of those conditions. In other words in his derivations he clearly uses integral calculus, and so naturally he's assuming continuity in the possible values. (And I've seen dumbed down derivations too but they still inherently assume arbitrary freedom to choose parameters).

But we all know that quantum values are not continuous, they are quantized - multiples or half multiples of hbar for example. So we can't actually choose any arbitrary value for our polarizer measurement; we are slightly restricted (we don't have *totally* free will). If Bell's theorem is re-derived using discontinuous summations instead of continuous integration, do the inequalities still come out the same?

Does the question even make any sense?

On a side note, I have read some of the papers on scale relativity, which, for those of you who are unfamiliar with it, states that no length measurement, regardless of reference frame or scale, will ever be less than the Planck Length (similar to the invariance of the speed of light). The author claims to derive the Schrödinger equation and other postulates of QM using this theory. Some have dismissed it as "numerology" but I find it remarkable how much of QM drops into place once you quantize spacetime. Thus I have to wonder if EPR-Bell experiments can likewise be accounted for by quantizing polarization.
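On the discreteness question: the CHSH form of Bell's inequality needs no integrals at all. A brute force over every deterministic local strategy, a finite and fully discrete set, already yields the classical bound of 2, while QM reaches 2*sqrt(2). A sketch (the enumeration below is an illustration of the CHSH argument, not Bell's original derivation):

```python
import itertools
import math

# Alice picks setting a or a2; Bob picks b or b2. A deterministic local
# strategy pre-assigns an outcome of +1 or -1 to each of the four settings.
best = 0
for a, a2, b, b2 in itertools.product([-1, 1], repeat=4):
    # CHSH combination of the four correlators for this strategy
    s = a * b + a * b2 + a2 * b - a2 * b2
    best = max(best, abs(s))

print(best)              # -> 2, the local deterministic bound
print(2 * math.sqrt(2))  # ~2.83, the quantum maximum (Tsirelson's bound)
```

Since any local hidden-variable model is a probabilistic mixture of these deterministic strategies, the bound |S| <= 2 survives whether the hidden variable is continuous or quantized, so discreteness alone does not reopen the inequality.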
 
  • #22


From 't Hooft's paper:
It is easy to argue that, even the best conceivable computer, cannot compute ahead of time what Mr. Conway will do, simply because Nature does her own calculations much faster than any man-made construction, made out of parts existing in Nature, can ever do. There is no need to demand for more free will than that.
This, to me, is the coup de grace. I've said this many times myself - you need more than the entire universe in order to observe/predict the entire universe (whether in terms of speed or size or accuracy). If you're using billiard balls to measure the location of billiard balls, you can never know the location of ALL the billiard balls or know their location more precisely than the diameter of a billiard ball!

But that's just an intuitive derivation of the HUP. And even given that uncertainty in billiard ball positions, they don't behave like waves, nor do they exhibit entanglement.
 
  • #23


peter0302 said:
No, you're trapped in a classical understanding of entanglement.
Yes, my understanding of entanglement has a classical basis.

peter0302 said:
Bell's theorem proves that there is no actual property that can be common to the entangled particles before their detection that can account for the correlations that we see.
I don't think Bell's theorem proves anything about nature. Bell inequalities are simply arithmetic expressions (with respect to N properties of the members of some population, a certain numerical relationship will always hold).

peter0302 said:
It turns out that the probability of joint detection is dependent solely on the difference in angle between the two polarizers.
Yes, but only if the experimental design matches up the data sequences at A and B according to the assumption of common (prior to filtration) cause -- in other words, the assumption that what is getting analyzed at A during a certain interval is identical to what is getting analyzed at B during that same interval.

peter0302 said:
Moreover, it works even if the polarizer angles are set a nanosecond before detection.
Of course, why shouldn't it? There's always one, and only one, angular difference associated with any given pair of detection attributes.

peter0302 said:
There's nothing about the experimental set up that could cause that.
Apparently there is something about the experimental setup that causes it, because it's been reproduced at least hundreds of times.

peter0302 said:
There's no conceivable hidden variable scheme that could cause the photons to behave that way. It is as though they "know" what the other polarizer angle was.
If A and B are analyzing the same thing, then it's easily understandable. If they aren't, then it's a complete mystery.
Note that we can't say anything more about what's being analyzed except in accordance with the experimental design. So, if you're doing optical Bell tests using polarizers, then nothing can be said about the polarization of photons prior to production at the detectors. But the working assumption is that opposite-moving, polarizer-incident disturbances associated with paired attributes are identical.

peter0302 said:
Define a qualitative result being recorded. By whom? By a computer? A person? A cat? It's subjective. There is no consistent definition of observer or observation. It all depends on the experiment. Heisenberg even said this. He said the quantum/classical divide depends on the experiment. It's not an objective process. And it's not well understood (in CI). The only thing that is well understood is how to calculate the odds.
Just because the decision about where to draw the line can seem a bit arbitrary at times doesn't mean that, once the line is drawn, or once a qualitative result is recorded, it's not objective.

The CI is the super-realistic, instrumentalist interpretation of quantum theory -- and therefore the most objective way to look at it.
 
  • #24


ThomasT said:
I don't think Bell's theorem proves anything about nature. Bell inequalities are simply arithmetic expressions. (With respect to N properties of the members of some population, a certain numerical relationship will always hold.)

Well, here's the surprise: those inequalities are violated by:
1) quantum mechanical predictions
2) experimental results of an ideal EPR experiment.

As you said, they should normally hold. They don't. That means that you CANNOT find a set of properties that "predict" the results, as those should, as you correctly point out, satisfy numerical relationships that will always hold.

This is as shocking as the following: there's a theorem that says that if you have two sets of objects, and you count the objects in the first set, and you find m, and you count the objects in the second set, and you find n, then if you count the objects in both sets together, you should find, well, n + m. You now take EPR-marbles in two bags. You count the marbles in the first bag and you find 5. You count the marbles in the second bag and you find 3. You count the marbles in the first bag and then in the second bag, and you find 6. Huh? THAT's the surprise. An "obvious" arithmetic property simply doesn't hold.

Yes, but only if the experimental design matches up the data sequences at A and B according to the assumption of common (prior to filtration) cause -- in other words, the assumption that what is getting analyzed at A during a certain interval is identical to what is getting analyzed at B during that same interval.

It would be even more surprising if the correlations held between data that were NOT matched up!

Of course, why shouldn't it? There's always one, and only one, angular difference associated with any given pair of detection attributes.

The point is that each detection event, taken individually, doesn't "know" what the setting at the other side was. So the only way to "correlate" with this difference is that we are measuring a common property of the two objects. Well, it turns out that correlations due to common properties have to obey the arithmetic inequalities that Bell found, and lo and behold, the actual correlations that are observed, and that are predicted by quantum mechanics, do NOT satisfy those arithmetic inequalities.
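The gap between a common-property model and the quantum prediction is easy to see numerically. Below is a toy Monte Carlo sketch (not any specific published model): each photon pair carries one shared random polarization, and a photon passes its polarizer whenever the polarizer axis lies within 45 degrees of that shared polarization. The function names are illustrative.

```python
import math
import random

def lhv_match(theta_deg, trials=200_000):
    """Toy local-hidden-variable model: both photons share a random
    polarization lam; each passes its polarizer iff the polarizer axis
    is within 45 degrees of lam (mod 180). Returns the same-result rate."""
    theta = math.radians(theta_deg)
    matches = 0
    for _ in range(trials):
        lam = random.uniform(0, math.pi)        # shared hidden polarization
        a, b = 0.0, theta                       # the two polarizer settings
        pass_a = math.cos(2 * (a - lam)) > 0    # within 45 deg of lam
        pass_b = math.cos(2 * (b - lam)) > 0
        matches += (pass_a == pass_b)
    return matches / trials

def qm_match(theta_deg):
    """Quantum-mechanical prediction for the same-result rate: cos^2(theta)."""
    return math.cos(math.radians(theta_deg)) ** 2

for theta in (0, 30, 60, 120):
    print(theta, round(lhv_match(theta), 3), round(qm_match(theta), 3))
```

At 120 degrees the toy model gives roughly 1/3 while quantum mechanics predicts 0.25, which is exactly the mismatch discussed below for the Mermin-style settings.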

Apparently there is something about the experimental setup that causes it, because it's been reproduced at least hundreds of times.

Point is that it can't be something that comes from the source! That's the difficult part.

If A and B are analyzing the same thing, then it's easily understandable.

No, it isn't. If they were analysing the same thing, Bell's arithmetic inequalities should hold. And they don't in this case.


Note that we can't say anything more about what's being analyzed except in accordance with the experimental design. So, if you're doing optical Bell tests using polarizers, then nothing can be said about the polarization of photons prior to production at the detectors. But the working assumption is that opposite-moving, polarizer-incident disturbances associated with paired attributes are identical.

No, not even that. No property emitted from the source could produce the observed correlations. Again, because if they did, they should follow the Bell inequalities, which are, as you correctly point out, nothing but arithmetic expressions which should hold for any set of N properties (emitted from the source). And they don't.
 
  • #25


peter0302 said:
Question about the derivation of Bell's theorem. He assumes, does he not, not only the freedom to choose experimental conditions, but also the freedom to choose any continuous real value for the parameters of those conditions. In other words in his derivations he clearly uses integral calculus, and so naturally he's assuming continuity in the possible values. (And I've seen dumbed down derivations too but they still inherently assume arbitrary freedom to choose parameters).

But we all know that quantum values are not continuous, they are quantized - multiples or half multiples of hbar for example. So we can't actually choose any arbitrary value for our polarizer measurement; we are slightly restricted (we don't have *totally* free will). If Bell's theorem is re-derived using discontinuous summations instead of continuous integration, do the inequalities still come out the same?

Does the question even make any sense?

On a side note, I have read some of the papers on scale relativity, which, for those of you who are unfamiliar with it, states that no length measurement, regardless of reference frame or scale, will ever be less than the Planck length (similar to the invariance of the speed of light). The author claims to derive the Schrödinger equation and other postulates of QM using this theory. Some have dismissed it as "numerology" but I find it remarkable how much of QM drops into place once you quantize spacetime. Thus I have to wonder if EPR-Bell experiments can likewise be accounted for by quantizing polarization.

It is not really necessary for there to be free will for the inequality to be violated. You don't really need to make last minute polarizer setting choices. It just makes it easier to see that there is no signal influence between observers that accounts for the results.

If you set Alice & Bob's polarizers at 120 degrees apart (say Alice at 0 and Bob at 120 degrees) or -120 degrees apart (Alice at 0 and Bob at 240 degrees), you get the same coincidence results, 25% match. Classically, it cannot be less than 33.3% if you expect internal consistency at those angle settings. That is Bell's Theorem (Mermin variation).

If you try to put together a possible set of results for angles A, B and C (all 120 degrees apart) you will quickly see that it cannot be made to yield results consistent with experiment. So you will see that free will is NOT an assumption of Bell's Theorem.
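That exercise can be done by brute force. The sketch below (my own check, not from any post in the thread) enumerates every possible table of predetermined +/- outcomes at three settings A, B, C, and computes how often two different settings agree:

```python
from itertools import product, combinations

# Every possible predetermined outcome table for settings A, B, C.
best = 1.0
for outcomes in product([+1, -1], repeat=3):
    pairs = list(combinations(range(3), 2))   # (A,B), (A,C), (B,C)
    match_rate = sum(outcomes[i] == outcomes[j] for i, j in pairs) / len(pairs)
    best = min(best, match_rate)

print(best)   # 1/3: no assignment matches on fewer than a third of the pairs
# Quantum mechanics predicts cos^2(120 deg) = 0.25 for these settings.
```

Whatever predetermined values you pick, at least one of the three pairs must agree, so the match rate averaged over pairs can never fall below 1/3; the quantum prediction of 25% is therefore unreachable by any such assignment.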

A better question is to ask whether Fair Sampling is a valid assumption. And the answer to that is similar to the answer for any scientific experiment. All scientific theory is essentially an extrapolation of the results of a finite series of scientific experiments. Were those experiments biased in some way, it is possible we could find that relativity is wrong, as are all theories. But that is part and parcel of the scientific method and has nothing to do with Bell's Theorem. So I consider such discussion more of a philosophical topic than a topic specific to entanglement.

If you are interested in that philosophical topic, I might recommend "How the Laws of Physics Lie" by Nancy Cartwright. It is excellent.
 
  • #26


Ok, let me put it a different way. Forget about "free will." Let's just focus on the quantization of spin. Does Bell's inequality hold just as easily for discontinuous functions as for continuous ones?
 
  • #27


peter0302 said:
Ok, let me put it a different way. Forget about "free will." Let's just focus on the quantization of spin. Does Bell's inequality hold just as easily for discontinuous functions as for continuous ones?

It relates to the QM formula, which is continuous. If you want to posit a discontinuous function or discontinuous underlying reality, then you will want some experimental support for it. There isn't anything like that at this time.

However, spin is quantized in one critical sense. It is basically -1 or +1 (can be scaled from 0 to 1 as well) at any point in time that you choose to measure it. Once it is measured, it then takes on an angular value and all non-commuting components (relative to that angular value) are now reset to some random value (-1 or +1). For spin 1 particles, non-commuting components are offset 45 degrees. For spin 1/2 particles, non-commuting components are offset 90 degrees.
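This "reset" of non-commuting components can be sketched for photon polarization with Malus-law probabilities. The function below is a toy illustration under that assumption, not a standard API: a measurement passes with probability cos^2 of the relative angle, and afterwards the photon's polarization is projected onto the polarizer axis (pass) or the orthogonal axis (fail).

```python
import math
import random

def measure(photon_angle, polarizer_angle):
    """Malus-law projection for a single photon: passes with probability
    cos^2(relative angle); afterwards the polarization is 'reset' to the
    polarizer axis (pass) or the orthogonal axis (fail)."""
    rel = math.radians(photon_angle - polarizer_angle)
    if random.random() < math.cos(rel) ** 2:
        return True, polarizer_angle            # passed: aligned with axis
    return False, polarizer_angle + 90          # blocked: orthogonal axis

# Measure at 0 deg, then again at 0 deg: the second result always repeats,
# because the first measurement fixed the polarization.
angle = 37.0
ok1, angle = measure(angle, 0)
ok2, angle = measure(angle, 0)
# Measuring at 45 deg afterwards is 50/50: that component has been "reset"
# to a random value by the 0-deg measurement.
```

Repeating a measurement along the same axis is deterministic, while the 45-degree (non-commuting, for photons) component is randomized, which is the behavior described above.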
 
  • #28


It relates to the QM formula, which is continuous. If you want to posit a discontinuous function or discontinuous underlying reality, then you will want some experimental support for it. There isn't anything like that at this time.
Perhaps not, but if postulating one solves all of the interpretive difficulties, such as the way scale relativity claims to, then it might be quite useful.
 
  • #29


vanesch said:
Now, maybe you know this, but then I don't understand your statements, which then sound tautological to me: "particles show entangled behavior because they became entangled at their source". Sure. But that doesn't explain the "paradoxial" correlations AND lack of correlation at the same time.
Bell's theorem would have us expect the intensity of light transmitted via crossed polarizers to vary as a linear function of the angular difference between the polarizers.
On the other hand, we see in EPR-Bell tests, as well as in classical polariscopic setups, that the intensity of light transmitted via crossed polarizers varies as cos^2 of the angular difference.

This says to me that there is a relationship, a connection, between what is happening in optical Bell tests to produce the observed correlations and what is happening in a classical polariscopic setup to produce the observed Malus-law angular dependence.

Granted that the numbers are a bit different, but I'm trying for a conceptual understanding of what is happening to produce the EPR-Bell correlations.

The intensity of the light transmitted by the analyzing polarizer in a polariscopic setup corresponds to the rate of coincidental detection in an EPR-Bell setup. Propagating between the polarizers in both setups is an identical optical disturbance. In a polariscopic setup, the light that is transmitted by the first polarizer is identical to the light that is incident on the analyzing polarizer. In an EPR-Bell setup, the light incident on the polarizer at A is identical to the light incident on the polarizer at B during any given coincidence interval.
 
  • #30


vanesch said:
THAT's the surprise. An "obvious" arithmetic property simply doesn't hold.
I would expect the correlation between angular difference and rate of coincidental detection to be linear only if (1) my formulation manifested the assumption of common emission cause and (2) my "theorem" also assumed that common emission cause must produce a linear functional relationship between the correlated variables.

vanesch said:
It would even be more surprising if the correlations even helt between data that were NOT matched up!
Yes, but that doesn't happen, does it? And the fact that the data does have to be very carefully matched according to the assumption of common emission cause is further support for the assumption that there is a common emission cause.

vanesch said:
Point is that it can't be something that comes from the source! That's the difficult part.
It might seem that it's even more difficult to see why Bell's theorem doesn't rule out emission-produced or pre-polarization entanglement. But it isn't, and it doesn't.

vanesch said:
If they were analysing the same thing, Bell's arithmetic inequalities should hold.
Not necessarily.

vanesch said:
No property emitted from the source could produce the observed correlations.
And yet this is what's generally assumed to be happening. The models are based on common properties emitted from the source. As far as I know, there aren't any working models based on FTL or instantaneous action at a distance assumptions that are used by experimenters in designing and doing EPR-Bell experiments.
 
