Long-distance correlation, information, and faster-than-light interaction

The discussion revolves around the nature of long-distance correlations in quantum mechanics, particularly in the context of Bell's theorem and faster-than-light (FTL) communication. Participants explore the implications of photon polarization and the randomness inherent in quantum measurements, debating whether FTL influences are necessary to explain observed correlations. The conversation emphasizes that while correlations exist, attributing them to an influence between distant measurements is problematic and may misrepresent the underlying physics. There is a call for further investigation into the randomness of photon behavior without resorting to FTL theories, suggesting that understanding these correlations requires a reevaluation of how information is conceptualized in quantum experiments. The thread ultimately highlights the complexities of interpreting quantum entanglement and the limitations of classical models in explaining quantum phenomena.
  • #61
ddd123 said:
To me the white dwarf example is absolutely the same, so I kept using the EPR one. You remove an atom from one side, suddenly, placing the atom on the other side becomes possible. It's exactly the same as in my picture above, with an atom placer-remover instead of the polarizer.
Yes, I agree we have the same issue in that example; I just think it is an example where it is even more clear that the "propagating influence" philosophy is not a good one. The reason I say it is "not good" is that it leads us to imagine that what we know will happen is "weird." That's what I mean by a "not good" philosophy. There are no individual electrons in a white dwarf, so they don't have locations; they are indistinguishable from each other. What have locations are the measurements that we do on a sea of electrons.
 
  • #62
Even if you consider the set of measurements as a mapping's input, the elements of this set correspond to spacetime events: so the probability-distribution output, as observed in one spacetime region, depends instantaneously on the input in another.

Ken G said:
What have locations are the measurements that we do on electrons.

Right, that's why it doesn't change anything for me. Even without realism you still have nonlocality without relaxing other assumptions (like forward causality).
 
  • #63
ddd123 said:
Even if you consider the set of measurements as a mapping's input, the elements of this set correspond to spacetime events: so a probability distribution output, as observed in one spacetime region, depends on the input in another instantaneously.
There's nothing strange about that, it happens with a pair of socks. But what is significant is that for the socks, the change in the probability distribution only appears to be nonlocal if you think the probability distribution exists at the location of the measurement. If you recognize that the location of a probability distribution is in the brain of the person using it, again there is never any nonlocality. My point is that it is very easy to make nonlocality go away; all you need to reject is realism-- that combination that allows us to imagine a set of independent parts that have probability distributions carried around with the parts, subject to subluminally propagating influences. That picture doesn't work well at all for either white dwarfs or EPR experiments, because the "propagating influences" have to do things like propagate back in time. So I say, jettison that picture, and generalize the meaning of a system to be nothing more than what we can observe it to be: a mapping from a preparation to a set of outcomes associated with each hypothetical set of measurements we could do. Nothing more, and nothing less. If you just say that, there's never any nonlocality. Personally, I don't think it's "weird" to stick with what we actually need, especially when we get no nonlocality when we do that.
Right, that's why it doesn't change anything for me. Even without realism you still have nonlocality without relaxing other assumptions (like forward causality).
No, you never get nonlocality without realism. That's why the Bell theorem is said to rule out local realism, it is not said to rule out locality.
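The socks example can be made concrete with a short Python sketch (the colors and helper name are invented for illustration): the "instantaneous" update of the distant sock's probabilities is just Bayesian conditioning, and the updated distribution lives in whatever description does the conditioning, not at the far sock's location.

```python
from fractions import Fraction

# A pair of socks is prepared as (pink, green) or (green, pink), 50/50.
prior = {("pink", "green"): Fraction(1, 2),
         ("green", "pink"): Fraction(1, 2)}

def condition(dist, side, color):
    # Bayesian update: keep only the outcomes consistent with the
    # observation, then renormalize.  Nothing is sent to the other sock.
    idx = 0 if side == "left" else 1
    kept = {k: v for k, v in dist.items() if k[idx] == color}
    total = sum(kept.values())
    return {k: v / total for k, v in kept.items()}

# Seeing the left sock is pink makes the right sock green with certainty--
# in the conditioner's description, not at the right sock's location.
post = condition(prior, "left", "pink")
print(post)  # {('pink', 'green'): Fraction(1, 1)}
```

The same conditioning could be done a year later from a lab notebook; the "change" has no spacetime location of its own.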
 
  • #64
Ah, I think I'm understanding you more now. But for me it's not that the probability distribution exists at the spacetime location. What exists in the spacetime region are the measurement results. The probability distribution is asymptotically reconstructed from a large number of events. The hypothetical concerning measurement choices here involves a different observed result there. The probability distribution is a conclusion; the physical events are laid out, and the hypotheticals involved display an instantaneous "if here then there".

BTW I wasn't talking about Bell's theorem.
 
  • #65
Ken G said:
That's why the Bell theorem is said to rule out nonlocal realism
You mean it rules out local realism? It proves that no local hidden variables could possibly reproduce the observed outcomes.
 
  • #66
ddd123 said:
Ah, I think I'm understanding you more now. But for me it's not that the probability distribution exists at the spacetime location. What exists in the spacetime region are the measurement results.
That's fine, you can have the measurement results exist at their location, and still keep locality. In fact, that's how you keep locality-- measurements have the location of the measurement, predictive probabilities have the location of the brain using them. Stick to that, and add nothing more that you don't need, and you get no nonlocality in either white dwarfs or EPR experiments.
The hypothetical concerning measurement choices here involves a different observed result there. Probability distribution is a conclusion, the physical events are laid out and the hypotheticals involved display an instantaneous "if here then there".
There are two types of probability distributions. One is an expectation in the mind of the physicist, which is to be checked; the other is a distribution of outcomes, which is not actually a probability distribution but is what the expectation is checked against, so in the sense of comparing apples to apples, we can call that a probability distribution as well. The point is, if you never go beyond asserting that the former has the location of a brain at the time the brain is using it, and the latter has the location of a measuring apparatus at the time of the measurement, then you never need any nonlocality. The need for nonlocality doesn't come from that; it comes from the claim that there exists a unique probability distribution for any hypothetical but possible measurement at any place and time, and that this unique distribution is independent of everything going on outside its past light cone. That's all you need to jettison to restore the proper locality of everything-- get rid of the "propagating influences" requirement.
BTW I wasn't talking about Bell's theorem.
We can just as easily talk about white dwarfs, everything I said above applies there also.
 
  • #67
But, while the hypothesized probability distribution exists only abstractly, the measurement choices and results are real and localized. And the observed correlation frequencies do change instantaneously on one side when changing the measurement choices on the other side.
 
  • #68
ddd123 said:
But, while the hypothesized probability distribution exists only abstractly, the measurement choices and results are real and localized. And the observed correlation frequencies do change instantaneously on one side when changing the measurement choices on the other side.
No-- when a system is viewed as a mapping between a preparation and any set of possible measurements, that automatically includes all their correlations. They are all set by the preparation, so the correlations never "change". What is a change in a correlation?
 
  • #69
Is there an article/book where this is looked at in detail? Or, does this position you're explaining have a name?
 
  • #70
Ken G said:
What is a change in a correlation?
Like the delayed choice quantum eraser?
 
  • #71
ddd123 said:
Is there an article/book where this is looked at in detail? Or, does this position you're explaining have a name?
I don't know if there is a name for it, or if anyone else has said it first. I only know what I can show: you don't get Bell's theorem if you don't adopt local realism. That means any approach that rejects the assumption that a system comes complete with a set of unique probabilities for each set of observations that could be done on it, including the assumption that those probabilities are independent of each other, suffices to make that theorem irrelevant. For example, see http://drchinese.com/David/Bell_Theorem_Easy_Math.htm, where the assumption I refer to is called "local hidden variables."

So in other words, we are talking about our options for not adopting local hidden variables. What I am critiquing is the philosophy that says if we don't have local hidden variables, we must have nonlocal hidden variables. I say drop the whole idea that we have hidden variables, meaning variables "hidden in the parts", where those variables determine unique probabilities for any observations on the parts, independent of any observations anywhere else, and can only be changed by "propagating influences." Instead, just say that a system is a preparation together with the mapping it produces, where that mapping is a map between any set of hypothetical observations you can name, and the associated set of probabilities. That's what quantum mechanics actually does, so my approach is a kind of "minimalist" philosophy applied to scientific interpretation.
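To make "local hidden variables" concrete, here is a rough Python sketch (the specific hidden-variable rule is invented for illustration) that evaluates the CHSH combination S for the quantum photon-polarization correlation E = cos 2(a - b) and for one simple local model in which each pair carries a shared polarization angle. The local model lands at the classical bound of 2, while the quantum prediction reaches 2*sqrt(2) ≈ 2.83, which is the content of the theorem the linked page walks through.

```python
import math
import random

def E_qm(a, b):
    # Quantum prediction for entangled photon polarizations: E = cos(2(a - b))
    return math.cos(2 * (a - b))

def E_lhv(a, b, n=100_000, seed=0):
    # Toy local-hidden-variable model: each pair carries one shared
    # polarization angle lam; each side answers +1 if its analyzer is
    # within 45 degrees of lam (mod 180 degrees), else -1.
    rng = random.Random(seed)

    def outcome(angle, lam):
        d = abs(angle - lam) % math.pi
        d = min(d, math.pi - d)
        return 1 if d < math.pi / 4 else -1

    total = 0
    for _ in range(n):
        lam = rng.uniform(0, math.pi)
        total += outcome(a, lam) * outcome(b, lam)
    return total / n

def S(E):
    # CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b')
    a, b, ap, bp = 0, math.pi / 8, math.pi / 4, 3 * math.pi / 8
    return E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)

print(S(E_qm))   # 2*sqrt(2) ≈ 2.828
print(S(E_lhv))  # ≈ 2.0, at the local bound
```

Any rule that assigns definite ±1 answers per pair, however the hidden variable is distributed, keeps S ≤ 2; that is exactly what the cos 2(a - b) correlation violates.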
 
  • #72
The fact that Bell's theorem only rules out (forward causal, one world, non-superdeterministic, etc...) local realism doesn't mean local non-realism with all the other "normal" assumptions is possible. You still have to prove that it is possible. Bell's is a no-go theorem, not a go theorem.

Also, earlier you said:

Ken G said:
But a more important issue is, I never said the theory doesn't violate locality, of course it does that. I said it doesn't need to violate it by application of a concept of "FTL or retrocausal influences."

But now you seem to be saying precisely that.
 
  • #73
jerromyjon said:
Like the delayed choice quantum eraser?
Explain how a delayed choice quantum eraser "changes" a correlation. The correlations are specified by the preparation. Correlations are two-point comparisons among measurements. I'm saying that when you have the preparation, you have a concept of a "system," which is just a machine for connecting any set of measurements you can do with all the probability outcomes of those measurements, including their correlations. If you do different measurements, you are invoking different correlations from that set. Nothing "changes."
 
  • #74
ddd123 said:
The fact that Bell's theorem only rules out (forward causal, one world, non-superdeterministic, etc...) local realism doesn't mean local non-realism with all the other "normal" assumptions is possible. You still have to prove that it is possible. Bell's is a no-go theorem, not a go theorem.
True, but what I'm talking about is clearly a "go" theorem-- it is quantum mechanics. Are you saying quantum mechanics does not allow us to take a preparation and provide a mapping between any set of measurements and their probability distributions, including all the two-point correlations? Of course that's just what quantum mechanics lets us do. Nothing I've said goes beyond that in any way-- that's the whole point: I am simply not requiring that a "system" acquire any elements at all that quantum mechanics does not empower us to equip our concept of a "system" with.
But now you seem to be saying precisely that.
Yes, I actually edited that statement shortly after saying it, because I realized it wasn't quite what I meant to say. Sorry for creating confusion. What I'm saying is, sometimes we see claims that quantum mechanics forces us to reject locality, but that's only if we insist on clinging to the hidden-variables concept that comes from combining realism with the idea that a system is made of "real parts." The formally true statement, I'm sure you'll agree, is that all we know is that we cannot have "local realism." I'm saying locality is a more general and valuable physical principle than the "made of real parts" version of realism, so when we recognize that we can't have local realism, we should keep the locality but jettison the version of realism that says a system is a sum of independent real parts. That picture is not very good with indistinguishability, and it is not very good with entanglement either. So junk it. You don't lose any quantum mechanics by junking that philosophy-- quantum mechanics doesn't need it. This is clear: all quantum mechanics does is tell you how to take a preparation and figure out how to associate a set of outcomes with a set of measurements, including not only probability distributions but also two-point correlations (and higher).
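A minimal sketch of that "preparation plus mapping to probabilities and correlations" picture, assuming a spin-1/2 singlet as the prepared system (the function names are mine), in Python with NumPy: every two-point correlation E(a, b) = -cos(a - b) is fixed by the state, so choosing different analyzer settings selects a different entry of the mapping rather than "changing" anything.

```python
import numpy as np

# Singlet preparation: |psi> = (|01> - |10>) / sqrt(2)
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)

def spin(theta):
    # Spin measurement along an axis at angle theta in the x-z plane:
    # cos(theta) Z + sin(theta) X, with eigenvalues +1 and -1
    return np.array([[np.cos(theta),  np.sin(theta)],
                     [np.sin(theta), -np.cos(theta)]])

def corr(a, b):
    # Two-point correlation <psi| A (x) B |psi>, read off from the
    # preparation alone -- no reference to when either measurement happens
    return psi @ np.kron(spin(a), spin(b)) @ psi

print(corr(0.0, 0.0))        # ≈ -1: perfect anticorrelation
print(corr(0.0, np.pi / 3))  # ≈ -cos(60 degrees) = -0.5
```

Nothing in `corr` takes a time argument: the mapping from settings to correlations is a property of the preparation, which is the point being argued above.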
 
  • #75
Well, personally, I'm not for nonlocal realism because I don't like the LET assumption or non-forward causality. Basically my view is nonlocal nonrealist. I just can't grasp your argument; I don't know about that. Surely I am intellectually limited, because otherwise I could either concede your point or rebut it further. If I could see a fully laid-out exposition in a paper, or a completely different explanation (maybe written by someone else, since your style doesn't seem to be able to convey the concept to me), that would help.
 
  • #76
ddd123 said:
Well, personally, I'm not for nonlocal realism because I don't like the LET assumption or non-forward causality. Basically my view is nonlocal nonrealist.
What would you call an approach that says this:
A system is a physical setup that is established by a preparation and has the function of associating any hypothetical measurements we can describe with a set of probability distributions, including two-point correlations (and higher). Quantum mechanics provides us with the instructions for connecting the preparations to the probabilities.

To me, that description is all we need to use the system concept in science. It does not invoke realism in the sense of independent parts owning their own unique probabilities that can only be changed by "influences"; we only invoke that picture when it is actually working for us (which is, as I said, when the influences are subluminal and can themselves be observed by intercepting them), so this approach gets called "nonrealist." But it's the "realist" approach that seems unrealistic to me. In any event, what I just said never encounters any issue with nonlocality, either in terms of Bell's theorem in EPR systems, or in terms of fermionic indistinguishability in white dwarfs.
 
  • #77
Ken G said:
What would you call an approach that says this:
A system is a physical setup that is established by a preparation and has the function of associating any hypothetical measurements we can describe with a set of probability distributions, including two-point correlations (and higher). Quantum mechanics provides us with the instructions for connecting the preparations to the probabilities.

I don't know if that's local.

In any event, what I just said never encounters any issue with nonlocality, either in terms of Bell's theorem in EPR systems, or in terms of fermionic indistinguishability in white dwarfs.

It can still be nonlocal by hidden assumption.
 
  • #78
ddd123 said:
Well, personally, I'm not for nonlocal realism because I don't like the LET assumption or non-forward causality. Basically my view is nonlocal nonrealist. I just can't grasp your argument; I don't know about that. Surely I am intellectually limited, because otherwise I could either concede your point or rebut it further. If I could see a fully laid-out exposition in a paper, or a completely different explanation (maybe written by someone else, since your style doesn't seem to be able to convey the concept to me), that would help.
Chapter 6 of Quantum Theory Concepts and Methods by Asher Peres is well argued. http://www.fisica.net/quantica/Peres%20-%20Quantum%20Theory%20Concepts%20and%20Methods.pdf.
 
  • #79
ddd123 said:
I don't know if that's local.

It can still be nonlocal by hidden assumption.
The system could be called "nonlocal" in the sense that it is holistic, like a field can be regarded as holistic. But it's not the system that gets called nonlocal in a lot of places (including on this forum, which I was reacting to initially), it is the influences that do. That's what I object to-- if you regard the system I'm talking about as a "nonlocal system", I don't have any objection. I object to the phrase "nonlocal influences", because that implies we have a propagation of an influence that moves faster than light and even back in time. My approach has none of that.
 
  • #80
Ken G said:
The system could be called "nonlocal" in the sense that it is holistic, like a field can be regarded as holistic. But it's not the system that gets called nonlocal, it is the influences that do. That's what I object to-- if you regard the system I'm talking about as a "nonlocal system", I don't have any objection. I object to the phrase "nonlocal influences", because that implies we have a propagation of an influence that moves faster than light and even back in time. My approach has none of that.

What would you say about the book Mentz linked? Is that your position?
 
  • #81
ddd123 said:
What would you say about the book Mentz linked? Is that your position?
Peres argues that in EPR it is impossible to define an observer-independent concept of 'before' and 'after.' Using this concept along with particle-oriented ideas leads to non-understanding. There's a lot more than that in the chapter.
 
  • #82
ddd123 said:
What would you say about the book Mentz linked? Is that your position?
I haven't read the book, but from a few short snapshots, I'd say the author is quite careful not to add anything more than is necessary, so I think we are largely in agreement. For example, he says:
"A quantum system is a useful abstraction, which frequently appears in the literature, but does not really exist in nature. In general, a quantum system is defined by an equivalence class of preparations. (Recall that “preparations” and “tests” are the primitive notions of quantum theory. Their meaning is the set of instructions to be followed by an experimenter.)"

So there is no requirement that the system be comprised of "independent parts", that is no kind of fundamental attribute of a system in his approach, and that's what I am advocating as well. He goes on:
"We can then define a state as follows: A state is characterized by the probabilities of the various outcomes of every conceivable test."
So yes, that's what I'm talking about. It's the set of tests that matter, not the set of "real parts". The concept of a part is quite optional, it only works in some situations (though granted, quite a lot).
 
  • #83
Then I'll try to work through that chapter. Thanks both.
 
  • #84
Ken G said:
Explain how a delayed choice quantum eraser "changes" a correlation.
After a delay of several ns, a random event occurs that affects the results of an entangled partner that has already hit another screen. When the which-path information was "preserved," the partner showed a which-path pattern; when the path detection was erased, the interference pattern was observed.
Ken G said:
The correlations are specified by the preparation.
The correlations occur only when a random obfuscation occurs later.
 
  • #85
jerromyjon said:
After a delay of several ns, a random event occurs that affects the results of an entangled partner that has already hit another screen. When the which-path information was "preserved," the partner showed a which-path pattern; when the path detection was erased, the interference pattern was observed.

The correlations occur only when a random obfuscation occurs later.
There is no need to assert when the correlations occur, if we simply say that the system comprises the preparation and all the probabilities associated with all the possible measurements. There is no "change"; there is only a decision about which measurements to do. It makes no difference when those decisions were made, there are no changes-- unless we insist on believing in the concept of independent probabilities for the "parts." That's the same idea that gives us retrocausality. I say stick to only what we need, and both the idea that there are "influences" and the idea that "correlations change" go away immediately.
 
  • #86
ddd123 said:
In the latter case, how do you consider it with respect to different inertial frames? In a frame where Bob measures first, it's Bob influencing FTL and Alice's side is being influenced, and vice-versa?
Yes, and the cute thing about QM is that it's all the same how you look at it. It's some initially random common property that gets changed to match/oppose the setting of the "first" detector, then at the "second" detector it gets changed again for the second particle with probability depending on the first value... Statistically it works out to be equivalent in either order.

If you insist on using deterministic simulation with a set value for the property even before the measurements, you do have different evolution depending on how you look at it, but still both possibilities are ok, so throw a coin or otherwise "predetermine" which one you want to use the same way you predetermined the initial hidden property. In the end it just doesn't matter.
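georgir's order-independence claim can be checked with a toy Monte Carlo (a sketch with invented names, using the spin-1/2 singlet conditional probability P(same outcome) = sin^2((a - b)/2)): whichever side is treated as measuring "first" gets a fair coin, the other side is drawn from the conditional, and the resulting statistics agree.

```python
import math
import random

def run(theta_a, theta_b, first, n=100_000, seed=1):
    # Sequential-"collapse" toy model for a spin-1/2 singlet: the side that
    # measures "first" gets a fair coin (the singlet marginal is 50/50 for
    # any setting); the other side is then drawn from the conditional
    # probability P(same outcome) = sin^2((a - b)/2).
    rng = random.Random(seed)
    p_same = math.sin((theta_a - theta_b) / 2) ** 2
    total = 0
    for _ in range(n):
        x = 1 if rng.random() < 0.5 else -1      # "first" outcome
        y = x if rng.random() < p_same else -x   # conditioned "second" outcome
        a_out, b_out = (x, y) if first == "alice" else (y, x)
        total += a_out * b_out
    return total / n

# Same correlation E ≈ -cos(a - b) whichever side goes "first":
# the frame-dependent ordering leaves the statistics unchanged.
print(run(0.0, math.pi / 3, first="alice"))  # ≈ -0.5
print(run(0.0, math.pi / 3, first="bob"))    # ≈ -0.5
```

This works because the marginals are uniform and the conditional is symmetric in the two settings, so P(x)P(y|x) = P(y)P(x|y); that is the statistical sense in which "either order" tells the same story.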
 
  • #87
georgir said:
If you insist on using deterministic simulation with a set value for the property even before the measurements

Interesting you say deterministic. What's the relationship between hidden variables / realism (I reckon that they're the same) and determinism? Which one implies the other?
 
  • #88
ddd123 said:
Interesting you say deterministic. What's the relationship between hidden variables / realism (I reckon that they're the same) and determinism? Which one implies the other?
For any randomness in a model you can just as easily substitute a supposedly deterministic mechanism dependent on unknown "hidden variables". Whether you should is another question entirely-- if they stay hidden forever, Occam's razor may have something to say about them.
[edit: got confused that we're in the other thread from which we got split earlier, and posted some nonsense bits originally, now removed]
 
  • #89
Ken G said:
There is no need to assert when the correlations occur, if we simply say that the system comprises of the preparation and all the probabilities associated with all the possible measurements.
In this case there is obvious reason and sufficient data recorded to determine when interference occurs and when classical trajectories occur, based on the geometry of the preparation alone.
Ken G said:
It makes no difference when those decisions were made, there are no changes-- unless we insist on believing in the concept of independent probabilities for the "parts."
How do you relate a detection at D0, which occurs first, with particle or wave behavior depending on whether the entangled partner is detected later in one slit or in both?
 
  • #90
jerromyjon said:
In this case there is obvious reason and sufficient data recorded to determine when interference occurs and when classical trajectories occur, based on the geometry of the preparation alone.
Then please say when the correlations occurred, and how that means any correlation ever "changed" in a delayed-choice experiment. It seems to me you are simply applying the philosophy of local realism plus nonlocal influences, but no data forces that interpretation on us. My interpretation is that correlations do not exist until sufficient measurements have occurred to demonstrate those correlations. Using that simple rule, they never "change" in any experiment, unless you do a different experiment, in which case it makes perfect sense that it could produce different correlations. All we need to understand is why a different experiment produces different correlations, and quantum mechanics already does that for us, in a way that is perfectly consistent and so does not "change."
How do you relate a detection at D0 which occurs first with particle or wave behavior depending on the entangled partner being detected later either in one slit or both?
It makes no difference to me at all when the detections occur. All physics does is connect a set of measurements to their expected probability distributions and correlations, where you first have to specify both the preparation and what measurements are being made. I don't care at all when the measurements get made; that plays no role in the physics of entanglement experiments, and it affects our expectations for those outcomes not at all. That's simply a fact about these experiments. So all that is happening in delayed choice is, we go in with an expectation that the times when we do the measurements should matter, but we find out they don't, so we retrofit this idea that something had to "change" to get that to be true. But the blame is all on our initial expectation that when we did the measurements should matter, even though we have no mechanism in mind by which it possibly could matter. We should simply recognize our initial expectation was wrong, and drop it. It's all a holdover from local realism; if we didn't go in with that philosophy in the first place, we wouldn't need to retrofit it.
 
