Is entanglement still considered spooky?

  • Thread starter Karl Coryat
  • Tags
    Entanglement
In summary, it's still mysterious to some physicists how entanglement can occur between particles that are spatially separated.
  • #1
Karl Coryat
Is entanglement still considered "spooky"?

Even now in 2012, you often hear the phrase "spooky action at a distance" regarding entanglement. Is this just a popular-science cliché, or do some physicists really consider EPR effects (e.g. Alice sees spin-up, therefore Bob must see spin-down) somehow mysterious?

This may be my naïve view, but I see this phenomenon as a conservation issue, a generalization of Newton's 3rd law, where the "action" and "reaction" in this case are spacelike separated. What subtlety am I missing by viewing a Bell-test result as analogous to a classical cannon shot over here and a recoil over there?

To put it another way: Even if no hidden variables are involved, and regardless of distance, is anyone surprised that the universe constrains Alice and Bob's experiments to abide by ordinary laws of conservation?
 
  • #2


Karl Coryat said:
To put it another way: Even if no hidden variables are involved, and regardless of distance, is anyone surprised
Surprised, no. Mystified, yes.
Karl Coryat said:
that the universe constrains Alice and Bob's experiments to abide by ordinary laws of conservation?
The mystery is: how??
 
  • #3


Exactly: we may know what happens, but why is it happening? Spooky indeed.
 
  • #4


One particular aspect I'm not clear on: after making a measurement and checking the other particle, has anyone then remeasured the first, observed that it has changed, and then found that the second has also changed?
 
  • #5
salvestrom, for an excellent explanation of what entanglement is all about and what makes it so weird, see this article by Nick Herbert: http://quantumtantra.com/bell2.html

Most people, when they first hear about entanglement, make the same assumption you did, that it's just conservation laws: e.g. if one particle has one spin, then conservation of angular momentum would lead to the other particle having the opposite spin. But if this were really what entanglement was about, there would be nothing "quantum" about it. You could replicate this trivial kind of "entanglement" classically by writing your name on a piece of paper, cutting the paper in half and randomly putting the two halves in separate envelopes that are sent to distant locations. Then if one of the envelopes is found to contain the left half of the paper, the other one is guaranteed to have the right half: you can call it "conservation of paper".

No, this is not at all the nature of quantum entanglement, which is a much more subtle and mysterious phenomenon. To find out the difficulties encountered if you viewed entanglement as similar to my envelopes example, see the link I referred to above.
 
  • #6


Just imagine if we had quantum entanglement technology. You could literally make a cell phone call from anywhere in the universe!
 
  • #7


Antientrophy said:
Just imagine if we had quantum entanglement technology. You could literally make a cell phone call from anywhere in the universe!
No, you couldn't.
 
  • #8
Antientrophy said:
Just imagine if we had quantum entanglement technology. You could literally make a cell phone call from anywhere in the universe!
No, the same theory of quantum mechanics that tells us about the wonders of entanglement also tells us that it cannot be used to transmit information faster than possible by other means (i.e. the speed of light).
 
  • #9


Thanks for the responses. I've always found the folks here to be very helpful.

I guess I just don't take the idea of spatial separation very seriously. The holographic picture appeals to me. On FQXi recently there was a neat argument, based on quantum cryptography, for why the idea of spatial positioning is deeply problematic:
http://www.fqxi.org/community/forum/topic/1238
 
  • #10


easyrider said:
Exactly: we may know what happens, but why is it happening? Spooky indeed.

The why is... "something must happen", otherwise there is an absence of interaction, and that something is the why, built out of a unique set of variables.
 
  • #11
lugita15 said:
salvestrom, for an excellent explanation of what entanglement is all about and what makes it so weird, see this article by Nick Herbert: http://quantumtantra.com/bell2.html

Most people, when they first hear about entanglement, make the same assumption you did, that it's just conservation laws: e.g. if one particle has one spin, then conservation of angular momentum would lead to the other particle having the opposite spin. But if this were really what entanglement was about, there would be nothing "quantum" about it. You could replicate this trivial kind of "entanglement" classically by writing your name on a piece of paper, cutting the paper in half and randomly putting the two halves in separate envelopes that are sent to distant locations. Then if one of the envelopes is found to contain the left half of the paper, the other one is guaranteed to have the right half: you can call it "conservation of paper".

No, this is not at all the nature of quantum entanglement, which is a much more subtle and mysterious phenomenon. To find out the difficulties encountered if you viewed entanglement as similar to my envelopes example, see the link I referred to above.

What does anyone else make of the above and the included link?

Personally, I find nothing mysterious about the described experiment, which is more like the envelope analogy. Synchronised photons don't seem like much of an issue. My description, which I thought was the actual implication of quantum entanglement, would be genuinely weird if true.

Nick Herbert's Wikipedia page and own website are not encouraging. In the linked article he declares the non-linear jump from a 25% to a 75% error rate as proof of non-locality without explaining why. It seems odd to conclude anything at all, given there is a diagram showing how the relationship varies.

Does someone have a link to another rundown that doesn't, at the bottom, have a link to another page about placing limits on psychic powers?
 
  • #12


The distinction between the entanglement phenomenon and a classical situation (such as the envelope example) is subtle. What many pop-science writers don't go into is that a particle can be measured in any manner of ways, and its entangled twin will always turn up opposite. If you measure spin up along 0°, the twin will definitely be spin-down along 0°, or along 90° a 50-50 chance of being spin-up or spin-down. If you then measure another particle along 45°, its twin will definitely be opposite along 45° or 50-50 along 135°. And so on. So, in order for this to be happening in a classical manner (like the envelope), it seems as if each particle would have to carry an inordinate amount of information to account for any possible measurement result -- or, one particle would need to "signal" the other regarding the measurement that was just performed. Since neither explanation is satisfying (especially since Aspect and other tests show that such signaling would be superluminal), the phenomenon is deemed weird.
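The angle-dependence described above can be sketched numerically. This is just the textbook singlet-state prediction for spin-1/2 pairs under idealized conditions (the function name is my own):

```python
import math

def prob_opposite(delta_deg):
    """Textbook singlet-state prediction: probability that Alice and Bob
    get opposite spin results when their analyzer axes differ by delta_deg."""
    return math.cos(math.radians(delta_deg) / 2) ** 2

print(prob_opposite(0))    # twins measured along the same axis always disagree
print(prob_opposite(90))   # axes 90 degrees apart give a 50-50 chance
```

The same function covers the 45°/135° example in the paragraph above: the relative angle is again 90°, so again a 50-50 chance.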

My original question was whether the physics community is still really "spooked," as various interpretations and the holographic principle claim to account for these things. For example, there is the fact that any "spooky" test results can only be verified through ordinary, local communication channels (as in relational QM).

A fairly extensive and readable run-down of entanglement, written by Jeffrey Bub, can be found here:
http://plato.stanford.edu/entries/qt-entangle/
 
Last edited:
  • #13


zsarbomba said:
The why is... "something must happen", otherwise there is an absence of interaction, and that something is the why, built out of a unique set of variables.

No one can ever answer the why of something, only the how, and then the how again.

I think something like that should be among the fundamentals of physics: asking why never gets you anywhere. This is familiar from when we were children. Every time you get an answer you can ask why again; it never stops and doesn't lead to any more real understanding.
 
  • #14


Karl Coryat said:
The distinction between the entanglement phenomenon and a classical situation (such as the envelope example) is subtle. What many pop-science writers don't go into is that a particle can be measured in any manner of ways, and its entangled twin will always turn up opposite. If you measure spin up along 0°, the twin will definitely be spin-down along 0°, or along 90° a 50-50 chance of being spin-up or spin-down. If you then measure another particle along 45°, its twin will definitely be opposite along 45° or 50-50 along 135°. And so on. So, in order for this to be happening in a classical manner (like the envelope), it seems as if each particle would have to carry an inordinate amount of information to account for any possible measurement result -- or, one particle would need to "signal" the other regarding the measurement that was just performed. Since neither explanation is satisfying (especially since Aspect and other tests show that such signaling would be superluminal), the phenomenon is deemed "weird."

My original question was whether the physics community is still really "spooked," as various interpretations (and the holographic principle) claim to account for these things.

A fairly extensive and readable run-down of entanglement, written by Jeffrey Bub, can be found here:
http://plato.stanford.edu/entries/qt-entangle/

Turning up opposite doesn't seem particularly weird. They're entangled, after all. What I'm trying to ascertain from all this is whether it is being said, or has even been experimentally demonstrated, that you can remeasure the first particle, get a different result and then find that the second particle has also appropriately altered. That would be "weird".
 
  • #15


You can't remeasure the particle and get a different result on the same observable -- the wavefunction is already "collapsed."

I agree with you that entanglement isn't particularly weird, but you kind of have to throw out the idea that spatial separation is a fundamental feature of the world. Not everyone is ready to do that.
 
  • #16


Karl Coryat said:
You can't remeasure the particle and get a different result on the same observable -- the wavefunction is already "collapsed."

I agree with you that entanglement isn't particularly weird, but you kind of have to throw out the idea that spatial separation is a fundamental feature of the world. Not everyone is ready to do that.

Hmm. See, I'd start by arguing from the position that the wavefunction was collapsed at the point of entanglement, hence the outcome of any later measurement is pre-set. Unfortunately, I'm pretty sure that would be impossible to prove, and it ties in with other debates about the reality of the wavefunction, its collapse, etc.
 
  • #17


Well if the wavefunction has collapsed at the point of entanglement, then it would have to collapse with an infinite number of observable values, which kind of wipes out the idea of collapse in the first place (a single, definite result for a particular observable).

This is because prior to measurement, the entangled particle has the potential to yield a measurement result along any axis, and its twin, measured along the same axis, simultaneously has the potential to be measured opposite. The choice of axis is totally arbitrary. This can't be explained with any classical-type picture.
 
  • #18
salvestrom said:
What does anyone else make of the above and the included link? (http://quantumtantra.com/bell2.html)
The part between "Here's how it works." and "Therefore the locality assumption is false. Reality must be non-local." is good. It really explains why people think that entanglement is weird. This is as simple as possible, so if you do not get it, try to ask more specific questions about this explanation.

But let me add that the story does not end here. That's because there are differences between theory and experiment that can make a difference.

salvestrom said:
In the linked article he declares the non-linear jump from a 25% to a 75% error rate as proof of non-locality without explaining why.
A realistic treatment of that setup says you have a 50% mismatch maximum in step 4, given that the observations at steps 1, 2, 3 are correct. Experiments suggest a 75% mismatch in step 4 (and of course steps 1, 2, 3 as described).
 
  • #19
salvestrom said:
What does anyone else make of the above and the included link?

Personally, I find nothing mysterious about the described experiment, which is more like the envelope analogy. Synchronised photons don't seem like much of an issue. My description, which I thought was the actual implication of quantum entanglement, would be genuinely weird if true.

Nick Herbert's Wikipedia page and own website are not encouraging. In the linked article he declares the non-linear jump from a 25% to a 75% error rate as proof of non-locality without explaining why. It seems odd to conclude anything at all, given there is a diagram showing how the relationship varies.

Does someone have a link to another rundown that doesn't, at the bottom, have a link to another page about placing limits on psychic powers?
But the whole point of Herbert's article was to show how you run into difficulties if you assume that entanglement has an envelope-type explanation. Let me summarize the proof for you.

If you send the two photons through detectors oriented at 0 degrees, you get 100% correlation, which might suggest to you that the polarizations of the two photons were set initially, before they went apart, akin to the envelopes. If you turn one of the detectors by 30 degrees, you get a 25% error rate. No problem, you might think: it may just be that turning the detector leads to the photon's result changing 1 out of every 4 times. So now turn both detectors by 30 degrees in opposite directions. Each one alone would introduce a 25% error rate, so together they should produce an error rate of at most 50% (in fact even less than that, because if both of them have an error at the same time the two errors cancel each other out). Yet we find that the error rate is 75%, contradicting the conclusion we reached assuming an envelope-type explanation.
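For comparison, the standard quantum prediction for the error rate of polarization-entangled photons at relative detector angle δ is sin²(δ). A minimal sketch, assuming idealized detectors and ignoring noise and loopholes:

```python
import math

def qm_error_rate(delta_deg):
    # idealized quantum prediction: error (mismatch) rate for
    # polarization-entangled photons with detectors delta_deg apart
    return math.sin(math.radians(delta_deg)) ** 2

one_turned = qm_error_rate(30)    # 25% error, as in the article
local_bound = 2 * one_turned      # an envelope-type model allows at most 50%
both_turned = qm_error_rate(60)   # 75% error: the bound is violated
```

So the 75% figure is not an extra assumption: it follows from the same sin² rule that gives the 25% figure.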

And yes, the rest of Herbert's website is rather weird, but this article is perfectly good. He has published this simple proof of Bell's theorem in journals, and in fact this particular numerical example involving 30 and 60 degrees was the one Bell himself used to use to demonstrate his theorem to popular audiences.
 
Last edited:
  • #20


conquest said:
No one can ever answer the why of something, only the how, and then the how again.

I think something like that should be among the fundamentals of physics: asking why never gets you anywhere. This is familiar from when we were children. Every time you get an answer you can ask why again; it never stops and doesn't lead to any more real understanding.

I agree, sometimes it can be the bane of things. My girlfriend did it to me last night, like a child, over and over again: why, why, why, and every time I gave an answer. Then it got to a point where I could not answer why! At that point, I guess, that is the goalpost that research is fumbling with. We can answer why to a certain point, then it gets tricky. Why did I stand up? Easy. Why did the weather change? Easy. Why does light travel? Hard.
 
  • #21


zonde said:
The part between "Here's how it works." and "Therefore the locality assumption is false. Reality must be non-local." is good. It really explains why people think that entanglement is weird. This is as simple as possible, so if you do not get it, try to ask more specific questions about this explanation.

But let me add that the story does not end here. That's because there are differences between theory and experiment that can make a difference.


A realistic treatment of that setup says you have a 50% mismatch maximum in step 4, given that the observations at steps 1, 2, 3 are correct. Experiments suggest a 75% mismatch in step 4 (and of course steps 1, 2, 3 as described).

Eh. I've read the article over a few times now, and all I can get out of it is that the initial assumption that the error rate would be linear was false. In fact, because the relationship is non-linear and the 50% error occurs at 45°, the angle of 60° showing a 50% error would have been evidence of non-locality anyway. The worst part for me is that, knowing in advance that the 50/50 spread of 0's and 1's occurs at 45°, it would make no sense to assume that a 50% error rate occurs at 60°. Is it 50% when Alice alone moves her SPOT 60° without Bob moving his? I'm assuming it's still a 75% error.

Edit: I also would assume that the ratio of 1's and 0's at 30 and 60 degrees is 3/4 during calibration.

Effectively, I don't feel that anything has been proven other than that the relationship between angle and error is non-linear.
 
Last edited:
  • #22
As you might expect from a world of improbability, there are http://www.technologyreview.com/blog/arxiv/24759/
 
  • #23
questionpost said:
As you might expect from a world of improbability, there are http://www.technologyreview.com/blog/arxiv/24759/
Not so fast. Quoting from the article: "The influence between the particles may be immediate, but the process does not violate relativity because some information has to be sent classically at the speed of light."
 
  • #24


lugita15 said:
Not so fast. Quoting from the article: "The influence between the particles may be immediate, but the process does not violate relativity because some information has to be sent classically at the speed of light."

That would be true, but other members of this forum have corrected me, more accurately, to say it is a correlation: the information is not actually "travelling" a distance over time but mathematically correlating to a different state, which does not involve time. So you're not actually "sending" information, you're making different patterns mathematically true, which doesn't take time.
 
  • #25
salvestrom said:
Eh. I've read the article over a few times now, and all I can get out of it is that the initial assumption that the error rate would be linear was false. In fact, because the relationship is non-linear and the 50% error occurs at 45°, the angle of 60° showing a 50% error would have been evidence of non-locality anyway. The worst part for me is that, knowing in advance that the 50/50 spread of 0's and 1's occurs at 45°, it would make no sense to assume that a 50% error rate occurs at 60°. Is it 50% when Alice alone moves her SPOT 60° without Bob moving his? I'm assuming it's still a 75% error.

Edit: I also would assume that the ratio of 1's and 0's at 30 and 60 degrees is 3/4 during calibration.

Effectively, I don't feel that anything has been proven other than that the relationship between angle and error is non-linear.
No, it has nothing to do with assuming linearity; it has to do with the mathematics of probability. Suppose two people are at a bar, and a coin is flipped, and the next morning they are each asked whether it landed heads or tails. If neither person drinks the previous night, we find that they always give the same answer - so we conclude they are both giving the right answer about last night's coin flip. If one of them went drinking, we find that their answers differ 25% of the time, so we conclude that alcohol makes you have a 25% chance of giving the wrong answer. So now what happens if both people are drinking? Well, each one has a 25% chance of giving a wrong answer, so what is the probability that at least one of them gives the wrong answer? One out of four times the first person gives the wrong answer, and one out of four times the second person gives the wrong answer, so at most two out of four times a wrong answer is given. In other words, there is at most a 25%+25%=50% chance that at least one of them gives the wrong answer. And the probability that the two people give different answers is even less than that, because of course if both people give the wrong answer then their answers will be the same.

We have a local realist view called Coin Flip (CF) theory that says the two people are giving answers they planned in advance - perhaps based on a coin flip they did at the bar. As I showed above, CF theory makes a definite prediction - that the probability of the two people's answers differing when they had both been drinking is at most 50%. Yet we find that the probability turns out to be 75%. So we are led to conclude that CF theory is wrong. Does that make sense?
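The 50% bound in CF theory can be checked mechanically: whatever correlation the bar-room plan induces between "person 1 errs" and "person 2 errs", with each marginal fixed at 25%, the chance that exactly one of them errs never exceeds 50%. A small sketch (the grid over joint distributions is purely illustrative):

```python
# joint distribution over (errs1, errs2): p11 both err, p10 only the first,
# p01 only the second; the 25% marginals fix p10 = p01 = 0.25 - p11
differ_rates = []
for i in range(1001):
    p11 = 0.25 * i / 1000           # probability that both err, from 0 to 0.25
    p10 = p01 = 0.25 - p11
    differ_rates.append(p10 + p01)  # answers differ iff exactly one errs

print(max(differ_rates))  # 0.5, reached when the two errors never coincide
```

When both err at once the wrong answers agree with each other, which is why the differ rate can only go down from 50%, never up.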
 
Last edited:
  • #26


salvestrom said:
Eh. I've read the article over a few times now, and all I can get out of it is that the initial assumption that the error rate would be linear was false. In fact, because the relationship is non-linear and the 50% error occurs at 45°, the angle of 60° showing a 50% error would have been evidence of non-locality anyway. The worst part for me is that, knowing in advance that the 50/50 spread of 0's and 1's occurs at 45°, it would make no sense to assume that a 50% error rate occurs at 60°. Is it 50% when Alice alone moves her SPOT 60° without Bob moving his? I'm assuming it's still a 75% error.

Edit: I also would assume that the ratio of 1's and 0's at 30 and 60 degrees is 3/4 during calibration.

Effectively, I don't feel that anything has been proven other than that the relationship between angle and error is non-linear.
Alice has four coins:
A1, A2, A3, A4
and Bob has four coins:
B1, B2, B3, B4

At "0°" measurement Alice sets some combination of heads and tails on her coins, say:
A1 - H
A2 - H
A3 - T
A4 - H

and Bob at "0°" measurement sets some combination of heads and tails on his coins.
Let's say that combination is exactly the same as for Alice:
B1 - H
B2 - H
B3 - T
B4 - H

So if we compare pairs A1/B1, A2/B2, A3/B3, A4/B4 they all show the same side (that would be 0% mismatches).

Now for the "30°" measurement Alice changes her combination so that there are three matching pairs with Bob's "0°" measurement and one mismatch, say:
A1 - H
A2 - H
A3 - T
A4 - T

So we have that pair A4/B4 show mismatch (that would be 25% mismatches).

Now for the "-30°" measurement Bob changes his combination so that there are three matching pairs with Alice's "0°" measurement and one mismatch, say:
B1 - T
B2 - H
B3 - T
B4 - H

So we have that pair A1/B1 show mismatch (that would be 25% mismatches).

And now we compare Alice's "30°" measurement with Bob's "-30°" measurement and we have two matches and two mismatches (that would be 50% mismatches):
A1 - H / B1 - T
A2 - H / B2 - H
A3 - T / B3 - T
A4 - T / B4 - H

Can you think of some way to get a 75% mismatch in the last step?
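A brute-force search over this four-coin model suggests the answer is no: every assignment consistent with the 25% constraints above tops out at a 50% mismatch in the last step. A sketch of the model exactly as described, nothing more:

```python
from itertools import product

def mismatches(x, y):
    # number of coin positions where the two combinations differ
    return sum(a != b for a, b in zip(x, y))

sides = "HT"
worst = 0
for a0 in product(sides, repeat=4):            # Alice's "0°" combination; Bob's is identical
    for a30 in product(sides, repeat=4):       # Alice's "30°" combination
        if mismatches(a30, a0) != 1:           # must mismatch Bob's "0°" on exactly 1 of 4 coins
            continue
        for bm30 in product(sides, repeat=4):  # Bob's "-30°" combination
            if mismatches(bm30, a0) != 1:      # must mismatch Alice's "0°" on exactly 1 of 4 coins
                continue
            worst = max(worst, mismatches(a30, bm30))

print(worst / 4)  # 0.5: no assignment of coins reaches the observed 75%
```

The two single flips can land on different coins, giving at most 2 mismatches out of 4, or on the same coin, giving 0; there is no way to manufacture 3.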
 
  • #27


zonde said:
Can you think of some way how to get 75% mismatch in last step?

You've reduced the experiment to a classical probability. In the actual experiment, although there are two outcomes, the probability of getting one particular result is skewed based on the rotation of the crystal, right? Because which path the light takes is a quantum probability.

If Bob doesn't move his dial, Alice puts hers to 60°, and they get 50%, that'd be weird. But I assume that if Alice alone moves her dial then at 60 degrees there's still a 75% error rate.

I stand by the argument that expecting a 50% error rate at 60° was illogical in the first place. The graph seems to indicate that beyond 90° the error rate repeats, picking back up until at 180° they are again error free. Has this been experimentally tested?

Following through the original 60°/50% error rate assumption, the two dials would then only become entirely mismatched at 120°. This means the pattern fits three times into a full rotation of the circle. This is, as far as I'm aware, contrary to the crystal structure and the nature of polarization. Worse, calibration has already demonstrated that a 90° tilt of the detector returns a string of 0's and a 0° tilt is all 1's. Logic dictates that a full mismatch can only and must occur at an angle of 90°. Therefore the most obvious assumption was that a 50/50 error rate would occur at 45°.

The relationship, while not obvious to begin with as being non-linear, clearly is, and is most likely a result of "weighting" in the odds of getting a 0 or 1 at any given angle, because the photon's passage through the crystal is a quantum event.
 
  • #28


salvestrom said:
You've reduced the experiment to a classical probability. In the actual experiment, although there are two outcomes, the probability of getting one particular result is skewed based on the rotation of the crystal, right? Because which path the light takes is a quantum probability.
In an actual experiment you get one outcome half of the time and the other outcome half of the time at a single analyzer. The effect of rotating an analyzer (polarizing beam splitter) is observed only when you compare Alice's and Bob's results pairwise.

I do not understand what you mean by "quantum probability". Is it supposed to mean something like "magic"?


salvestrom said:
If Bob doesn't move his dial, Alice puts hers to 60°, and they get 50%, that'd be weird. But I assume that if Alice alone moves her dial then at 60 degrees there's still a 75% error rate.

I stand by the argument that expecting a 50% error rate at 60° was illogical in the first place. The graph seems to indicate that beyond 90° the error rate repeats, picking back up until at 180° they are again error free. Has this been experimentally tested?
I like this paper: Violation of Bell's inequality under strict Einstein locality conditions.

But there are two sides to this question. One side is what you observe in experiment, and the other side is how you explain that. You are talking about the observation side. The model I gave is about the other (explanation) side.

salvestrom said:
Following through the original 60°/50% error rate assumption, the two dials would then only become entirely mismatched at 120°. This means the pattern fits three times into a full rotation of the circle. This is, as far as I'm aware, contrary to the crystal structure and the nature of polarization. Worse, calibration has already demonstrated that a 90° tilt of the detector returns a string of 0's and a 0° tilt is all 1's. Logic dictates that a full mismatch can only and must occur at an angle of 90°. Therefore the most obvious assumption was that a 50/50 error rate would occur at 45°.

The relationship, while not obvious to begin with as being non-linear, clearly is, and is most likely a result of "weighting" in the odds of getting a 0 or 1 at any given angle, because the photon's passage through the crystal is a quantum event.

I gave you a model. This model does not represent what we observe in experiment.
The main question is: why? What is the key difference between the model and reality that makes this model wrong?
 
  • #29


salvestrom said:
You've reduced the experiment to a classical probability. In the actual experiment, although there are two outcomes, the probability of getting one particular result is skewed based on the rotation of the crystal, right? Because which path the light takes is a quantum probability.

If Bob doesn't move his dial, Alice puts hers to 60°, and they get 50%, that'd be weird. But I assume that if Alice alone moves her dial then at 60 degrees there's still a 75% error rate.

I stand by the argument that expecting a 50% error rate at 60° was illogical in the first place. The graph seems to indicate that beyond 90° the error rate repeats, picking back up until at 180° they are again error free. Has this been experimentally tested?

Following through the original 60°/50% error rate assumption, the two dials would then only become entirely mismatched at 120°. This means the pattern fits three times into a full rotation of the circle. This is, as far as I'm aware, contrary to the crystal structure and the nature of polarization. Worse, calibration has already demonstrated that a 90° tilt of the detector returns a string of 0's and a 0° tilt is all 1's. Logic dictates that a full mismatch can only and must occur at an angle of 90°. Therefore the most obvious assumption was that a 50/50 error rate would occur at 45°.

The relationship, while not obvious to begin with as being non-linear, clearly is, and is most likely a result of "weighting" in the odds of getting a 0 or 1 at any given angle, because the photon's passage through the crystal is a quantum event.
OK, let me try one more time. We find that if we turn both detectors to the same angle θ, then either both photons go through or both photons don't go through. From this we conclude that the two photons are making their decisions about whether to go through or not based on a function P(θ) that they agreed to in advance. If a photon encounters a detector at an angle θ, it consults this function: if P(θ)=1 it goes through, and if P(θ)=0 it doesn't go through. Based on this assumption (which we call local realism), we reach the following conclusions:

1. The probability that P(0) ≠ P(30) is 25%
2. The probability that P(-30) ≠ P(0) is 25%

Given these two facts, what is the probability that P(-30)≠P(30)? Clearly, it is less than or equal to 25%+25%=50%. Do you disagree with that?
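The bound follows from a pointwise fact about any pre-agreed instruction set: whenever the two end settings disagree, the function must flip across at least one of the two intermediate 30° steps. All eight deterministic cases can be checked mechanically (a trivial sketch):

```python
from itertools import product

# P assigns 0 or 1 to each of the three angles: -30, 0, +30
for p_m30, p_0, p_30 in product([0, 1], repeat=3):
    # if the ends disagree, P must flip across at least one of the two steps
    assert int(p_m30 != p_30) <= int(p_m30 != p_0) + int(p_0 != p_30)
```

Taking probabilities on both sides of the inequality turns this into P(P(-30) ≠ P(30)) ≤ 25% + 25% = 50%.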
 
  • #30


zonde said:
I do not understand what you mean by "quantum probability". Is it supposed to mean something like "magic"?

Quantum mechanics' most unusual features are analogous to long-purported magical abilities. I'm not saying they are, just that the next time you want to know what I mean by something, ask me and leave the derisory assumptions beside the keyboard.

What I meant was that although the outcomes can only be 1 of 2 possibilities, each potential outcome is weighted depending on the angle. At 45° each is no more likely than the other. Between 0° and 45° the outcome favours 1, and from 45° to 90° it favours 0. Non-linearly.

As for everything else you and Lugita have posted I'm still dwelling on it, particularly in view of some self-induced uncertainty about what I think is happening here.

In the meantime, perhaps you could figure out a way to explain this in context of how I keep discussing the experiment, and I shall endeavour to either realize why I'm wrong, and if I'm not, actually answer your own questions to me.

Edit: One question I still have no answer to: is the relationship between the 1 and 0 for a single detector and a non-entangled photon also sinusoidal?
 
Last edited:
  • #31


salvestrom said:
Edit: One question I still have no answer to: is the relationship between the 1 and 0 for a single detector and a non-entangled photon also sinusoidal?
Yes: if a photon is polarized in some direction, for example 0°, then the probability that it will go through a polarization detector oriented at an angle θ is a sinusoidal function of θ.
 
  • #32


lugita15 said:
Yes: if a photon is polarized in some direction, for example 0°, then the probability that it will go through a polarization detector oriented at an angle θ is a sinusoidal function of θ.

Firstly, my apologies, I didn't see your post about the alcoholic coin tossers until just before this one.

My primary focus has been on whether this actually proves non-locality. Your reply here indicates that in a purely local experiment at 60° the results are 75% 0's and 25% 1's, accepting that you get all 1's at 0° and all 0's at 90°. Therefore the local experiment doesn't even accord with the proposed locality, which was based on a linear increase in deviation.

This would seem to be the best starting point to explain anything. Can we interpret the above numbers as a 75% deviation from the results at 0°?
 
  • #33


salvestrom said:
Quantum mechanics' most unusual features are analogous to long-purported magical abilities. I'm not saying they are, just that the next time you want to know what I mean by something, ask me and leave the derisory assumptions beside the keyboard.

What I meant was that although the outcomes can only be 1 of 2 possibilities, each potential outcome is weighted depending on the angle. At 45° each is no more likely than the other. Between 0° and 45° the outcome is biased toward 1, and between 45° and 90° it is biased toward 0. Non-linearly.
Ok, so what do you mean by outcome and what do you mean by angle? There are two analyzers, two angles, and two outcomes, and then there is the relative angle between the analyzers and the "relative outcome", i.e. the two outcomes either match (1/1 and 0/0) or they don't (1/0 and 0/1).

salvestrom said:
Edit: One question I still have no answer to: is the relationship between the 1 and 0 for a single detector and a non-entangled photon also sinusoidal?
For linearly polarized photons it is as lugita says. You can take a look at Malus's law.

But the relationship between the 1 and 0 for a single detector and a single photon from a source that produces entangled photon pairs (we just ignore the other one) is not sinusoidal but flat, i.e. (for an idealized setup) the outcome stays 50/50 whatever the rotation of the analyzer.
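As a quick illustration of both claims (Malus's law for a polarized photon, and the flat 50/50 for one photon of an entangled pair, with the other ignored), here is a minimal Python sketch; the function names are mine, not from the thread:

```python
import math

def p_pass_polarized(photon_angle_deg, detector_angle_deg):
    """Malus's law: probability that a linearly polarized photon passes
    a polarizer, given the angle between its polarization and the detector."""
    theta = math.radians(detector_angle_deg - photon_angle_deg)
    return math.cos(theta) ** 2

def p_pass_entangled_single(detector_angle_deg):
    """One photon of an entangled pair, the partner ignored: a flat 50/50
    regardless of how the analyzer is rotated (idealized setup)."""
    return 0.5

# A photon polarized at 0° hitting detectors at various angles,
# next to the flat entangled-single-photon probability:
for angle in (0, 30, 45, 60, 90):
    print(angle, round(p_pass_polarized(0, angle), 2),
          p_pass_entangled_single(angle))
```

This reproduces the numbers discussed above: all 1's at 0°, 50/50 at 45°, 25% 1's at 60°, and all 0's at 90°.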
 
  • #34


salvestrom said:
Firstly, my apologies, I didn't see your post about the alcoholic coin tossers until just before this one.

My primary focus has been on whether this actually proves non-locality. Your reply here indicates that in a purely local experiment at 60° the results are 75% 0's and 25% 1's, accepting that you get all 1's at 0° and all 0's at 90°. Therefore the local experiment doesn't even accord with the proposed local model, which was based on a linear increase in deviation.

This would seem to be the best starting point to explain anything. Can we interpret the above numbers as a 75% deviation from the results at 0°?
Here are the facts you need to understand:

1. If you have an unpolarized photon, and you put it through a detector, it will have a 50-50 chance of going through, regardless of the angle it's oriented at.
2. A local realist would say that the photon doesn't just randomly go through or not go through the detector oriented at an angle θ; he would say that each unpolarized photon has its own function P(θ) guiding its behavior: it goes through if P(θ)=1 and it doesn't go through if P(θ)=0.
3. Unfortunately, for any given unpolarized photon we can only find out one value of P(θ), because after we send it through a detector and it successfully goes through, it will now be polarized in the direction of the detector and it will "forget" the function P(θ).
4. If you have a pair of entangled photons and you put one of them through a detector, it will have a 50-50 chance of going through, regardless of the angle it's oriented at, just like an unpolarized photon.
5. Just as above, the local realist would say that the photon is acting according to some function P(θ) which tells it what to do.
6. If you have a pair of entangled photons and you put both of them through detectors that are turned to the same angle, then they will either both go through or both not go through.
7. Since the local realist does not believe that the two photons can coordinate their behavior by communicating instantaneously, he concludes the reason they're doing the same thing at the same angle is that they're both using the same function P(θ).
8. He is in a better position than he was before, because now he can find out the values of the function P(θ) at two different angles, by putting one photon through one angle and the other photon through a different angle.
9. If the entangled photons are put through detectors 30° apart, they have a 25% chance of not matching.
10. The local realist concludes that for any angle θ, the probability that P(θ±30°)≠P(θ) is 25%, or to put it another way the probability that P(θ±30°)=P(θ) is 75%.
11. So 75% of the time P(-30°)=P(0°), and 75% of the time P(0°)=P(30°). Each of these equalities fails only 25% of the time, so at least one of them fails at most 50% of the time, which means P(-30°)≠P(30°) can hold at most 50% of the time, never 75%.
12. Yet when the entangled photons are put through detectors 60° apart, they have a 75% chance of not matching, so the local realist is very confused.

What step do you not agree with?
 
  • #35


The outcome is not 50/50 regardless of angle. During calibration it is firmly established that at 0° all outcomes are 1 and at 90° all outcomes are 0. At 45° all outcomes are 50/50. At 60° the outcome is 75% 0's and 25% 1's.

In a single-detector, non-entangled-photon experiment the results are sinusoidal. At 60° the deviation from the 0° result is 75%. This is the result of a purely localised experiment.

Introducing Bob and entangled photons allows us to do something special. We can now know the outcome of two settings for Alice at the same time. Bob can be set to show us what Alice would show at any given angle, such as -30°, while Alice can be set to show another set of results at 30°. This is a 60° split and will show a 75% deviation, exactly as you get in a localised experiment. There is nothing non-local implied about this relationship. Alice is effectively showing a deviation from her own potential results, if we could actually record both angles at once purely using her detector.

The only spookiness is in the fact that both detectors return precisely the same results at the same angular setting, which could potentially be explained at the point of entanglement.

My argument is solely that non-locality cannot be inferred from the 75% deviation over 60°, because that deviation occurs in a purely localised version of the experiment as well.

Edit: at this point it's safe to say the step I don't agree with is the first one. It is contrary to the facts established during the calibration.
 
Last edited:
