Long-distance correlation, information, and faster-than-light interaction

  • Thread starter edguy99
  • #51
zonde
Gold Member
2,941
213
Another approach is to simply say that the measurements done on the system are part of what determines the probability distributions of those measurements. This is, after all, just what we see with entangled systems; it was only ever us who said these systems had to be composed of independent probability distributions. We know they do not, so why force "influences" down their throat, when we can just say what we see to be true: the probabilities depend on the measurements we choose, because part of what we mean by the system is how the outcomes of chosen measurements interact with the preparation. That's really all physics ever concerns itself with; the idea that you need influences to alter the independent probabilities is merely a holdover from other contexts where that actually works.

First of all, entanglement is never about perfect correlations, because you don't get Bell violations with perfect correlations. But a more important issue is, I never said the theory doesn't violate locality; of course it does that. I said it doesn't need to violate it by application of a concept of "FTL or retrocausal influences." One simply says that to talk meaningfully about a system, we need more than its preparation: we need to know what attributes of that system are being established by measurement, and the probabilities it will exhibit depend on the way the measurements establish those attributes. Again, outcomes are an interaction between preparation and measurement choices, no influences needed, FTL or otherwise. I'd say that same thing even if the speed of light were infinite-- there's no magical "influence" there either way; its causality violations are just a good clue that we made a bad choice of language.
Well yes, there is another explanation that works. It's called superdeterminism. Unfortunately, it's non-scientific. If that is what you mean (as it seems to me), then it's no real alternative to FTL.

Btw, as you refer to FTL and retrocausal influences together, do you consider them equally unacceptable?
 
  • #52
edguy99
Gold Member
450
29
... claims to match QM and is "local realistic" is going to be a violation of PhysicsForums guidelines regarding speculation and unpublished works (ie what is an acceptable source)...
I'm sorry, I will try to be more careful. I actually had not even checked whether it matched QM until you mentioned it. I was pleasantly surprised to hear you say it did.

I have been concentrating on the process of constructing the matrices A, B, E, and F. The only reason to run the animation is to generate the matrices, but it is nice to highlight the time-dependent things that are happening, i.e., 1) a random photon is shot, 2) Bob's and Alice's detectors are randomly set, 3) the detection is recorded in the matrices. Repeat until done.
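For concreteness, here is roughly what that loop does, as a sketch (the real detection functions and matrix conventions are the ones in post #44; the Malus-law detection model and the four analyzer angles below are just stand-ins I picked for illustration):

Python:
import math
import random

ANGLES = [0.0, 22.5, 45.0, 67.5]   # illustrative analyzer settings
N = 10000                          # number of photon pairs to shoot

# E[i][j] tallies matching detections with Alice at ANGLES[i] and Bob at
# ANGLES[j]; trials[i][j] counts how often that setting pair came up.
E = [[0] * len(ANGLES) for _ in ANGLES]
trials = [[0] * len(ANGLES) for _ in ANGLES]

def detect(photon_angle, analyzer_angle):
    # Illustrative Malus-law detection: returns 1 (pass) or 0 (absorbed).
    p = math.cos(math.radians(photon_angle - analyzer_angle)) ** 2
    return 1 if random.random() < p else 0

for _ in range(N):
    photon = random.uniform(0.0, 180.0)   # 1) random photon pair shot
    i = random.randrange(len(ANGLES))     # 2) Alice's detector randomly set
    j = random.randrange(len(ANGLES))     #    Bob's detector randomly set
    a = detect(photon, ANGLES[i])         # 3) detections recorded
    b = detect(photon + 90.0, ANGLES[j])  #    (orthogonally polarized partner)
    trials[i][j] += 1
    if a == b:
        E[i][j] += 1                      # tally the coincidence in E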

I found the E matrix interesting as I had not realized the importance of the two 0's in the E matrix in the Bell test until I did the animation.

Also, I have converted the animation to use only "statistical" photons (i.e., previously stages 1, 2, and 3 were classical and stage 4 was statistical; now there are only 3 stages, all statistical). Now you can see some numbers showing up in matrix E when doing the single-shot animation.

BTW, it would be great if you were interested in trying the functions in post #44 and generating a few sets of matrices to have a look at :)
 
  • #53
Ken G
Gold Member
4,438
333
Well yes, there is another explanation that works. It's called superdeterminism.
No, what I just described is not superdeterminism. What I just described is pure science. Let me repeat it to prove that.

In scientific thinking, any "system" can be regarded as a kind of machine for connecting a preparation with a set of outcomes for any set of measurements, where what I mean by "outcomes" can include probability distributions and not just definite predictions. That's it; that's what I said. There is no scientific requirement whatsoever that the "system" be able to be analyzed in terms of parts; it just happens that this is often possible. There is no scientific requirement whatsoever that the probability distributions be independent of each other, in the sense that the preparation must determine what those distributions are, independent of the actual set of measurements that are used-- it just so happens that this is often possible. And finally, there is no scientific requirement whatsoever for us to say that whenever we find correlations between measured outcomes that depend on what measurements are done, there have to be "influences" that propagate across the system, though again, there are situations where that is a useful way to think of what is happening (generally situations where the "influences" are slower than light and could in principle be intercepted, i.e., when the influences are themselves part of the measurable system). In all cases, what is happening here is that models that have worked for us in various contexts, models like "parts" and "independence" and "influences," are simply being mistaken for scientific requirements. But science is not defined by the model, it is defined by the method.
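To make that concrete, here is a toy sketch of what I mean (my own illustration, using the spin singlet as the preparation): the "system" is literally nothing but a map from the chosen measurements to a probability distribution.

Python:
import math

def singlet_system(a_angle, b_angle):
    """A 'system' in the sense above: the preparation (a spin singlet)
    plus the two chosen measurement directions maps to a joint probability
    distribution over outcomes. The settings are part of the input to the
    map; nothing propagates anywhere."""
    theta = a_angle - b_angle
    p_same = 0.5 * math.sin(theta / 2) ** 2   # P(+,+) = P(-,-)
    p_diff = 0.5 * math.cos(theta / 2) ** 2   # P(+,-) = P(-,+)
    return {(+1, +1): p_same, (-1, -1): p_same,
            (+1, -1): p_diff, (-1, +1): p_diff}

# Whatever Bob's setting is, Alice's marginal stays 50/50, so nothing
# here could even carry a signal:
probs = singlet_system(0.0, math.pi / 3)
print(sum(p for (a, b), p in probs.items() if a == +1))   # 0.5, up to rounding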
Btw As you refer to FTL and retrocausal influences together do you consider them equally unacceptable?
Yes, I see no particularly important distinction between crossing one light-cone boundary versus crossing two of them. Either way, it's not the way the time-honored model we call an "influence" works. My main objection to that language is that it demonstrably forces people to conclude that a behavior they are sure will happen is "weird." That's the signpost of a failing picture. What is weird is supposed to be what we do not expect, rather than what we do expect.
 
Last edited:
  • #54
481
55
Let me repeat it to prove that.
I really can't understand you, Ken G. Could you address my question directly?

I don't even know if I understood your premises here but this is what it seems to me that you're saying: the resulting probability distribution depends on the holistic system (which includes the measurement instruments). I've made a picture:

[attached image: the EPR setup, with a polarizer on each side]


It doesn't matter if the polarizers are part of the holistic system: sure, rotating one changes the outcome probability distribution, but it also changes it at the other side. A choice here impacts there, because the system is extended in space. You can say the whole system changes nonlocally, so as to avoid using the term FTL. But it's only a nominal difference. It may be verbally nicer.

Am I missing something?
 
  • #55
zonde
Gold Member
2,941
213
There is no scientific requirement whatsoever that the "system" be able to be analyzed in terms of parts, it just happens that this is often possible.
If we can make observations in terms of parts, then of course we can analyze these observations.

There is no scientific requirement whatsoever that the probability distributions be independent of each other, in the sense that the preparation must determine what those distributions are, independent of the actual set of measurements that are used-- it just so happens that this is often possible.
Actually, the no-superdeterminism assumption requires that the preparation is independent of reasonably randomized measurement settings.

And finally, there is no scientific requirement whatsoever for us to say that whenever we find correlations between measured outcomes that depend on what measurements are done, there has to be "influences" that propagate across the system, though again, there are situations where that is a useful way to think of what is happening (generally situations where the "influences" are slower than light and could in principle be intercepted, i.e., when the influences are themselves part of the measurable system).
There is such a requirement. That requirement is called falsifiability. That means there should be a valid observation that could indicate an FTL "influence" and falsify the "no FTL" hypothesis, even if we can't create FTL phenomena artificially.

In all cases, what is happening here is simply that models that have worked for us in various contexts, models like "parts" and "independence" and "influences", are simply being mistaken for scientific requirements. But science is not defined by the model, it is defined by the method.
Science is also defined by the requirements that scientific models have to fulfill.

Yes, I see no particularly important distinction between crossing one light-cone boundary versus crossing two of them. Either way, it's not the way the time-honored model we call an "influence" works. My main objection to that language is it demonstrably forces people to conclude that a behavior they are sure will happen is "weird." That's the signpost of a failing picture. What is weird is supposed to be what we do not expect, rather than what we do expect.
Are your objections about wording or what?

There are two theories, and what we expect from one theory contradicts what we consider possible in the other. If you say this is the signpost of a failing picture ... well, I agree.
 
  • #56
Ken G
Gold Member
4,438
333
You can say the whole system changes nonlocally, so to avoid using the term FTL. But it's only a nominal difference. It may be verbally nicer.

Am I missing something?
Yes. The concept of "FTL" involves the concept "faster." That involves the concept of speed, which involves the concept of movement or propagation. Also, when people use the term FTL, as in this thread, they generally couple it to the term "influence." So this is what I am saying has no need to be present in entanglement: the concept of a propagating influence. That's why I said it's not the "FTL" itself I object to, if all you mean by that is once you've embraced realism, you are stuck with nonlocality-- that is what we know to be true. But we don't have to leave it at that, if we still have something that seems "weird"-- we have something else too. We have the repudiation of the idea that a system is made of parts that connect via "propagating influences". That latter notion is simply a view that often works for us, so we mistake it for some kind of universal truth, or worse, a universal requirement for scientific investigation.

So when you say the system "changes nonlocally" when a measurement choice is made, even that implies a hidden assumption. You see that the measurement is applied at one place and time, and the system changes everywhere. But that isn't supported by the data either. The most general meaning of "a system" is a mapping between a preparation and a set of measurement probability distributions-- after all, that's all that is involved in empirical science. Notice that if you define the "system" that way, it never "changes" nonlocally! To get rid of "nonlocality," one merely needs to define a system the way I just did, and poof, it's gone. The nonlocality comes from requiring that the system must exist independently of the measurements that establish its attributes; that's why it is actually "local realism," i.e., the combination of locality and realism, that is ruled out-- not either one individually. Personally, I find it much more scientific to define a system the way I just did, but this does not get called "realism" because the measurements are included in the meaning of the system.
 
  • #57
481
55
I don't understand the definition. One thing is not slicing the system into parts, in the sense that they're not taken to be independent; another is ignoring spatiotemporal extension and events. Are you giving up the notion of spacetime events? What do you substitute for that?
 
  • #58
Ken G
Gold Member
4,438
333
If we can make observations in terms of parts then of course we can analyze these observations.
What do you mean by "observations in terms of parts?" What we have are observations at various places and times, that's it-- that's not parts. The "parts" are your personal philosophy here, nothing more. Now, it's certainly not a rare philosophy, and the "parts" model certainly works very well for us in almost all situations. But not all, that's the point. Hence, we should question that personal philosophy about observations of systems. It is the observations that have a place and time, not the "parts of the system." That's why I brought up white dwarf stars, they make this point really clear, when you have something the size of a planet that cannot be analyzed in terms of "observations on parts" because the "parts" are indistinguishable from each other.

Actually no superdeterminism assumptions requires that preparation is independent from reasonably randomized measurement settings.
I haven't said anything about superdeterminism other than that I am not talking about superdeterminism, nor have you shown that what I am saying is tantamount to superdeterminism.
There is such requirement. That requirement is called falsifiability.
Then you just violated falsifiability with your first sentence above. What is falsifiable about the claim that observations at a place and time are "observations in terms of parts," if entanglement is not a falsification of that claim, and indistinguishability in white dwarfs is not a falsification of that claim? Anyone who thinks observations are "observations in terms of parts" (which I admit is pretty much everyone before they learn quantum mechanics) is going to find both entanglement and fermionic indistinguishability "weird." But weirdness is not falsification; it is more like repudiation.

That means there should be valid observation that could indicate FTL "influence" and falsify "no FTL" hypothesis even if we can't create that FTL phenomena artificially.
Are you saying there is such a valid observation? If not, it means we are deciding between personal philosophies, and I've never claimed otherwise.
There are two theories and what we expect from one theory contradicts what we consider possible in the other theory. If you say this signpost of a failing picture ... well I agree.
I don't see what two contradicting theories you have in mind here. If anyone had been talking about slower-than-light influences that can be intercepted and studied as part of the system, then we'd be talking about a theory that makes different predictions than quantum mechanics, but no one is talking about a theory like that.
 
  • #59
481
55
To me the white dwarf example is absolutely the same, so I kept using the EPR one. You remove an atom from one side and, suddenly, placing an atom on the other side becomes possible. It's exactly the same as in my picture above, with an atom placer-remover instead of the polarizer.
 
  • #60
Ken G
Gold Member
4,438
333
I don't understand the definition. One thing is not slicing the system into parts in the sense that they're not taken to be independent; another is ignoring spatiotemporal extension and events. Are you giving up the notion of spacetime events?
I have in no way given up the notion of spacetime events, I have merely noticed what is in evidence: the spacetime events are the observations, not the parts of the system. The parts of the system need to mean nothing more than what the system is composed of, which is all part of the preparation. If you do an observation at a given place and time, it is certainly important that the observation occurred at a given place and time, and you will never need to use any other concept of "parts." Once the measurement is made, you can regard that as a new preparation, a new system, and a new mapping from that total preparation to new outcomes. Those new probabilities can change instantly because your information has changed instantly; that's "Bertlmann's socks." Nonlocality only appears when you combine with realism, i.e., combine with the picture that there exists a unique probability distribution that goes with each measurement independently of all other measurements outside the past light cone of that one. Local realism says you might not know what that unique distribution is, and any new information you get that constrains the preparation could cause you to reassess that unique probability, but you hold that it still exists unchanged. But if you don't think of a system as a list of unique probabilities, but rather as a machine for connecting a preparation with a set of probabilities associated with any set of measurements you could in principle do, then you have no problems with nonlocality.
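To spell out the socks point as a toy calculation (again, just my own illustration):

Python:
from fractions import Fraction

# Bertlmann's socks: the preparation guarantees the two socks differ.
joint = {("pink", "green"): Fraction(1, 2),
         ("green", "pink"): Fraction(1, 2)}   # (left sock, right sock)

# Before any observation, the right sock is pink with probability 1/2:
print(sum(p for (l, r), p in joint.items() if r == "pink"))   # 1/2

# Seeing the left sock is pink changes that conditional probability
# instantly -- a change of information in the observer's head, not an
# influence propagating to the other sock:
p_left_pink = sum(p for (l, r), p in joint.items() if l == "pink")
p_both_pink = sum(p for (l, r), p in joint.items()
                  if l == "pink" and r == "pink")
print(p_both_pink / p_left_pink)   # 0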
 
  • #61
Ken G
Gold Member
4,438
333
To me the white dwarf example is absolutely the same, so I kept using the EPR one. You remove an atom from one side, suddenly, placing the atom on the other side becomes possible. It's exactly the same as in my picture above, with an atom placer-remover instead of the polarizer.
Yes, I agree we have the same issue in that example, I just think it is an example where it is even more clear that the "propagating influence" philosophy is not a good one. The reason I say it is "not good" is that it leads us to imagine what we know will happen to be "weird." That's what I mean by a "not good" philosophy. There are not individual electrons in a white dwarf, so they don't have locations; they are indistinguishable from each other. What has a location is the measurements that we do on a sea of electrons.
 
  • #62
481
55
Even if you consider the set of measurements as a mapping's input, the elements of this set correspond to spacetime events: so a probability distribution output, as observed in one spacetime region, depends instantaneously on the input in another.

What has a location is the measurements that we do on electrons.
Right, that's why it doesn't change anything for me. Even without realism you still have nonlocality without relaxing other assumptions (like forward causality).
 
  • #63
Ken G
Gold Member
4,438
333
Even if you consider the set of measurements as a mapping's input, the elements of this set correspond to spacetime events: so a probability distribution output, as observed in one spacetime region, depends on the input in another instantaneously.
There's nothing strange about that; it happens with a pair of socks. But what is significant is that for the socks, the change in the probability distribution only appears to be nonlocal if you think the probability distribution exists at the location of the measurement. If you recognize that the location of a probability distribution is in the brain of the person using it, again there is never any nonlocality. My point is that it is very easy to make nonlocality go away: all you need to reject is local realism, that combination that allows us to imagine a set of independent parts that have probability distributions carried around with the parts, and subject to subluminally propagating influences. That picture doesn't work well at all for either white dwarfs or EPR experiments, because the "propagating influences" have to do things like propagate back in time. So I say, jettison that picture, and generalize the meaning of a system to be nothing more than we can observe it to be: a mapping from a preparation to a set of outcomes associated with each hypothetical set of measurements we could do. Nothing more, and nothing less. If you just say that, there's never any nonlocality. Personally, I don't think it's "weird" to stick with what we actually need, especially when we get no nonlocality when we do that.
Right, that's why it doesn't change anything for me. Even without realism you still have nonlocality without relaxing other assumptions (like forward causality).
No, you never get nonlocality without realism. That's why the Bell theorem is said to rule out local realism; it is not said to rule out locality.
 
Last edited:
  • #64
481
55
Ah, I think I'm understanding you more now. But for me it's not that the probability distribution exists at the spacetime location. What exists in the spacetime region is the measurement results. The probability distribution is asymptotically reconstructed from a large number of events. The hypothetical concerning measurement choices here involves a different observed result there. The probability distribution is a conclusion; the physical events are laid out, and the hypotheticals involved display an instantaneous "if here then there".
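To be explicit about "asymptotically reconstructed" (a toy sketch of my own, with a 50/50 source standing in for real detector clicks):

Python:
import random

# The localized things are the individual results, one per spacetime event:
results = [random.choice([+1, -1]) for _ in range(100000)]

# The "probability distribution" is only ever a conclusion drawn from them:
print(results.count(+1) / len(results))   # tends to 0.5 as events accumulate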

BTW I wasn't talking about Bell's theorem.
 
  • #65
1,241
189
That's why the Bell theorem is said to rule out nonlocal realism
You mean it rules out local realism? It proves that no local variables can possibly reproduce the observed outcomes.
 
  • #66
Ken G
Gold Member
4,438
333
Ah I think I'm understanding you more now. But for me it's not that the probability distribution exists at the spacetime location. What exist in the spacetime region is the measurement results.
That's fine, you can have the measurement results exist at their location, and still keep locality. In fact, that's how you keep locality-- measurements have the location of the measurement, predictive probabilities have the location of the brain using them. Stick to that, and add nothing more that you don't need, and you get no nonlocality in either white dwarfs or EPR experiments.
The hypothetical concerning measurement choices here involves a different observed result there. Probability distribution is a conclusion, the physical events are laid out and the hypotheticals involved display an instantaneous "if here then there".
There are two types of probability distributions. One is an expectation in the mind of the physicist, which is to be checked; the other is a distribution of outcomes, which is not actually a probability distribution, but it is what the probability distribution is to be checked against, so in the sense of comparing apples to apples, we can call that a probability distribution as well. The point is, if you never go beyond asserting that the former has the location of a brain and the time the brain is using it, and the latter has the location of a measuring apparatus and the time of the measurement, then you never need any nonlocality. The need for nonlocality doesn't come from that; it comes from the claim that there exists a unique probability distribution for any hypothetical but possible measurement at any place and time, and that unique distribution is independent of everything going on outside its past light cone. That's all you need to jettison to restore the proper locality of everything-- get rid of the "propagating influences" requirement.
BTW I wasn't talking about Bell's theorem.
We can just as easily talk about white dwarfs, everything I said above applies there also.
 
  • #67
481
55
But, while the hypothesized probability distribution exists only abstractly, the measurement choices and results are real and localized. And the observed correlation frequencies do change instantaneously on one side when changing the measurement choices on the other side.
 
  • #68
Ken G
Gold Member
4,438
333
But, while the hypothesized probability distribution exists only abstractly, the measurement choices and results are real and localized. And the observed correlation frequencies do change instantaneously on one side when changing the measurement choices on the other side.
No-- when a system is viewed as a mapping between a preparation and any set of possible measurements, that automatically includes all their correlations. They are all set by the preparation, so the correlations never "change". What is a change in a correlation?
 
  • #69
481
55
Is there an article/book where this is looked at in detail? Or, does this position you're explaining have a name?
 
  • #70
1,241
189
What is a change in a correlation?
Like the delayed choice quantum eraser?
 
  • #71
Ken G
Gold Member
4,438
333
Is there an article/book where this is looked at in detail? Or, does this position you're explaining have a name?
I don't know if there is a name for it, or if anyone else has said it first. I only know what I can show: you don't get Bell's theorem if you don't adopt local realism. That means any approach that rejects the assumption that a system comes complete with a set of unique probabilities for each set of observations that could be done on it, including the assumption that those probabilities are independent of each other, suffices to make that theorem irrelevant. For example, see http://drchinese.com/David/Bell_Theorem_Easy_Math.htm, where the assumption I refer to is called "local hidden variables."

So in other words, we are talking about our options for not adopting local hidden variables. What I am critiquing is the philosophy that says if we don't have local hidden variables, we must have nonlocal hidden variables. I say drop the whole idea that we have hidden variables, meaning variables "hidden in the parts", where those variables determine unique probabilities for any observations on the parts, independent of any observations anywhere else, and can only be changed by "propagating influences." Instead, just say that a system is a preparation together with the mapping it produces, where that mapping is a map between any set of hypothetical observations you can name, and the associated set of probabilities. That's what quantum mechanics actually does, so my approach is a kind of "minimalist" philosophy applied to scientific interpretation.
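To show the contrast numerically (a sketch I put together with the standard CHSH settings, not anything from that page): every deterministic local assignment of outcomes caps |S| at 2, while the quantum mapping from preparation plus settings gives 2√2.

Python:
import itertools
import math

# CHSH combination: S = E(a,b) - E(a,b') + E(a',b) + E(a',b')
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

def E_qm(x, y):
    # Quantum singlet prediction for the correlation at settings (x, y).
    return -math.cos(x - y)

S_qm = E_qm(a, b) - E_qm(a, b2) + E_qm(a2, b) + E_qm(a2, b2)
print(abs(S_qm))   # 2*sqrt(2) ~ 2.828

# Local hidden variables: each side's outcome is fixed in advance for each
# setting, independently of the far side. Check every such assignment:
S_lhv = max(abs(A1 * B1 - A1 * B2 + A2 * B1 + A2 * B2)
            for A1, A2, B1, B2 in itertools.product([-1, +1], repeat=4))
print(S_lhv)   # 2 -- the CHSH bound that no local assignment exceeds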
 
  • #72
481
55
The fact that Bell's theorem only rules out (forward-causal, one-world, non-superdeterministic, etc.) local realism doesn't mean local non-realism with all the other "normal" assumptions is possible. You still have to prove that it is possible. Bell's is a no-go theorem, not a go theorem.

Also, earlier you said:

But a more important issue is, I never said the theory doesn't violate locality, of course it does that. I said it doesn't need to violate it by application of a concept of "FTL or retrocausal influences."
But now you seem to be saying precisely that.
 
  • #73
Ken G
Gold Member
4,438
333
Like the delayed choice quantum eraser?
Explain how a delayed-choice quantum eraser "changes" a correlation. The correlations are specified by the preparation. Correlations are two-point comparisons among measurements. I'm saying that when you have the preparation, you have a concept of a "system," which is just a machine for connecting any set of measurements you can do with all the probability outcomes of those measurements, including their correlations. If you do different measurements, you are invoking different correlations from that set. Nothing "changes."
 
  • #74
Ken G
Gold Member
4,438
333
The fact that Bell's theorem only rules out (forward causal, one world, non-superdeterministic, etc...) local realism doesn't mean local non-realism with all the other "normal" assumptions is possible. You still have to prove that it is possible. Bell's is a no-go theorem, not a go theorem.
True, but what I'm talking about is clearly a "go" theorem-- it is quantum mechanics. Are you saying quantum mechanics does not allow us to take a preparation and provide a mapping between any set of measurements and their probability distributions, including all the two-point correlations? Of course that's just what quantum mechanics lets us do. Nothing I've said goes beyond that in any way-- that's the whole point: I am simply not equipping the concept of a "system" with any elements that quantum mechanics does not empower us to equip it with.
But now you seem to be saying precisely that.
Yes, I actually edited that statement shortly after making it, because I realized it wasn't quite what I meant to say. Sorry for creating confusion. What I'm saying is, sometimes we see claims that quantum mechanics forces us to reject locality, but that's only if we insist on clinging to the hidden variables concept associated with what happens when you combine realism with a concept that a system is made of "real parts." The formally true statement, I'm sure you'll agree, is that all we know is that we cannot have "local realism." I'm saying locality is a more general and valuable physical principle than the "made of real parts" version of realism, so when we recognize that we can't have local realism, we should keep the locality but jettison the version of realism that says a system is a sum of independent real parts. That picture is not very good with indistinguishability, and it is not very good with entanglement either. So junk it. You don't lose any quantum mechanics by junking that philosophy-- quantum mechanics doesn't need it. This is clear: all quantum mechanics does is tell you how to take a preparation and figure out how to associate a set of outcomes with a set of measurements, including not only probability distributions but also two-point correlations (and higher).
 
  • #75
481
55
Well, personally, I'm not for nonlocal realism, because I don't like the LET assumption or non-forward causality. Basically my view is nonlocal nonrealist. I just can't grasp your argument: I don't know about that. Surely I am intellectually limited, because otherwise I could either acknowledge your point or rebut it further. If I could see a fully laid-out exposition in a paper, or a completely different explanation (maybe written by someone else, since your style doesn't seem to be able to convey the concept to me), that'd help.
 
