Long-distance correlation, information, and faster-than-light interaction

In summary, the conversation discusses the concept of "weirdness" in physics experiments, specifically with regard to the correlation experiment. The four steps from classical communication to FTL communication are mentioned, as well as the potential for the experiment to provide a better understanding of the inherent randomness of photon polarization. Bell's theorem is also brought up as a limitation on certain models of the photon. The idea of information being stored in the system rather than in a specific location is proposed as a way to avoid the need for FTL communication. Overall, the conversation revolves around the need to let go of the concept of information having a physical location in order to fully understand and explain these experiments.
  • #36
zonde said:
I agree that we have to consider measuring instruments too. But if we talk about violation of Bell inequalities and non-locality it does not change anything.
If distant measurement results are determined locally then they can't violate Bell inequality.

Not sure what do you mean by "nonlocal changes" here and what is moving. Maybe you mean measurement settings as rotation of polarizers?

The "faster" in FTL implies speed. Speed implies movement. So here it means: the effect I produce by rotating the polarizer jumps to the other side. But if the whole setting, including the measurement instruments, is a single nonlocal entity, the global changes aren't being transmitted, they're just nonlocal.

I'm just going for a charitable interpretation of the claim "no FTL". Since I asked: if a choice here implies one result there and not another, how is there no action? In the end I got the above answer, and that's how I made sense of it. I don't know if it's consistent.
 
  • #37
zonde said:
Possible reasons to drop some approach is inconsistency or contradiction with observations.
But possible reasons to drop a philosophy include that it leads you to regard what you know will happen as "weird." That's a failing philosophy.
Treating entangled particles as parts gives a consistent picture.
So does treating them not as parts. The only difference is, nothing FTL.
 
  • #38
ddd123 said:
The "faster" in FTL implies speed. Speed implies movement. So here it means: the effect I produce by rotating the polarizer jumps to the other side. But if the whole setting, including the measurement instruments, is a single nonlocal entity, the global changes aren't being transmitted, they're just nonlocal.
Yes, FTL is meaningless if distance is an illusion. But such "distance is an illusion" nonlocality is a very radical idea, comparable to solipsism. Distance is a very basic concept in our perception of the physical world.
 
  • #39
ddd123 said:
I took it to mean: since even the measuring instruments are part of this whole, the nonlocal changes are of the whole too; so in this sense there isn't some "moving" (which the notion of speed implies).
Exactly.
As I said above, this to me seems like a rephrasing and it's not substantially different from just saying it's FTL. But at least it made sense to me with respect to the problem of action.
The reason it is substantially different from FTL isn't the FTL part, it is the "influence" part. If you want to regard a system as a whole as a fundamentally FTL entity, I have no objection to that view; my issue is with saying there are propagating influences that are what is FTL there. The "propagating influence" notion is always something that can be intercepted in principle, in every other context where that notion has proven useful. It has also proven not to be FTL in all those other contexts. So when you have a concept that shows two properties in the contexts in which it proved useful (that a propagating influence is not FTL and can be intercepted in principle), yet loses both those properties in some new context we are exporting it into, that is reason to doubt the success of that exportation. Any observations done by Alice do not need to influence observations done by Bob; they are simply part of the system being observed, as are the correlations they exhibit. The assumptions of the Bell inequality do not even come up in that approach.
 
  • #40
zonde said:
Yes, FTL is meaningless if distance is an illusion. But such "distance is an illusion" nonlocality is a very radical idea, comparable to solipsism. Distance is a very basic concept in our perception of the physical world.
Distance need not be dropped to treat a system and its environment as a whole thing. Distance is simply the scale of that whole thing. What can be dropped, and quite easily, is the idea that Bell inequality violations require "influences" to "move or propagate" across that distance, such that one can support a concept of FTL or retrocausality there. If it's not a cause, it can't be retrocausal. That's why I said Bell inequality violations are no kind of issue unless one insists on imagining that systems are separate from the environment that establishes their attributes, and unless one insists on regarding systems as composed of local parts that can only achieve global correlations either by "carrying information within those parts" or by "propagating influence between those parts." When that kind of language does not help us regard behavior we know will happen as normal, we need a different kind of language.
 
  • #41
Ken G said:
What can be dropped, and quite easily, is the idea that Bell inequality violations require "influences" to "move or propagate" across that distance, such that one can support a concept of FTL there.
The idea that Bell inequality violations require "influences" to "move or propagate" across that distance is not an assumption. It is a conclusion. You have to point out where you see a possibility of reasoning differently.
You can start here:
If a theory predicts perfect correlations for the outcomes of distant experiments, then either the theory must treat these outcomes as deterministically produced from the common past of these experiments, or the theory must violate locality.
Do you see any way to do this reasoning differently?

Ken G said:
That's why I said Bell's inequality violations are no kind of issue unless one insists on imagining that systems are separate from the environment that establishes the attributes of that system
You can include the local environment with the system; this does not allow you to violate Bell inequalities.

Ken G said:
When that kind of language does not help us regard behavior we know will happen as normal, we need a different kind of language.
I don't think this is language issue.
 
  • #42
I got lost in Ken G's later posts, especially regarding the philosophy. To me there is no substantial difference between FTL and holistic extended system: saying otherwise is either falling back upon a wrong notion of locality or ignoring aspects of phenomenology.
 
  • #43
DrChinese said:
... I ran another 100 and the results were 33 at the same angles which were 100% matched; and 67 at a difference of 120 degrees which were 37.31% matched. This model produces the QM statistics ...

Thank you for confirmation that the calculations used in the statistical/quantum section of the animation at http://www.animatedphysics.com/games/photon_longdistance_nonlocality.htm produce QM statistics.

DrChinese said:
... Alice and Bob measure at the same angle, there is an identical difference of γ − λ degrees for both Alice and Bob. That evaluates to something different than 1 or 0 ...

Yes, say it is 0.85. That number is compared to a random number between 0 and 1 to get an outcome of 1 or 0.

DrChinese said:
... Please, you are wildly off to offer a model that is so obviously refuted by Bell...

Please note that the quantum simulation ends up with entries in matrix E where Bob and Alice have their angles the same but end up with different results. The quantum simulation is NOT a Bell model because it allows the photon to have this property.

DrChinese said:
I am going to absolutely challenge everything you are saying about recreating the quantum statistics in a computer program using independent calculations. ...

The program is written in Javascript and the entire program can be viewed and checked with a right click and "view source" option within your browser, or some browsers have a "view source" in the menu.

That said, I can itemize some key aspects of the calculations within this program:

To set the photon's polarization

var ri = Math.random();
photonangle = Math.round(ri*360);
To set Bob and Alice polarizer angles

aliceangle = "0"
var ri = Math.random()
if (ri > .6666666) { aliceangle = "120" } else if (ri > .3333333) { aliceangle = "240" }
bobangle = "0"
ri = Math.random()
if (ri > .6666666) { bobangle = "120" } else if (ri > .3333333) { bobangle = "240" }
To check if Bob and Alice get detection events

Statistical/Quantum method

var probofhit = (Math.cos((rphotonangle - rbobangle)*2)+1)/2;
var ri = Math.random();
if (ri<probofhit) { bobhit = true };

var probofhit = (Math.cos((rphotonangle - raliceangle)*2)+1)/2;
var ri = Math.random();
if (ri<probofhit) { alicehit = true };
The classical version ended up a little messy:

if (bobangle == "0") {
if ( ((parseInt(photonangle) <= 45) || (parseInt(photonangle) > 315)) || ((parseInt(photonangle) <= 225) && (parseInt(photonangle) > 135)) ) {
bobtotals[0][1] = bobtotals[0][1] + 1; bobhit = true ...
if (bobangle == "120") {
if ( ((parseInt(photonangle) <= 165) && (parseInt(photonangle) > 75)) || ((parseInt(photonangle) <= 345) && (parseInt(photonangle) > 255)) ) {
bobtotals[1][1] = bobtotals[1][1] + 1; bobhit = true ...
if (bobangle == "240") {
if ( ((parseInt(photonangle) <= 285) && (parseInt(photonangle) > 195)) || ((parseInt(photonangle) <= 105) && (parseInt(photonangle) > 15)) ) {
bobtotals[2][1] = bobtotals[2][1] + 1; bobhit = true ...

if (aliceangle == "0") {
if ( ((parseInt(photonangle) <= 45) || (parseInt(photonangle) > 315)) || ((parseInt(photonangle) <= 225) && (parseInt(photonangle) > 135)) ) {
alicetotals[0][1] = alicetotals[0][1] + 1; alicehit = true ...
if (aliceangle == "120") {
if ( ((parseInt(photonangle) <= 165) && (parseInt(photonangle) > 75)) || ((parseInt(photonangle) <= 345) && (parseInt(photonangle) > 255)) ) {
alicetotals[1][1] = alicetotals[1][1] + 1; alicehit = true ...
if (aliceangle == "240") {
if ( ((parseInt(photonangle) <= 285) && (parseInt(photonangle) > 195)) || ((parseInt(photonangle) <= 105) && (parseInt(photonangle) > 15)) ) {
alicetotals[2][1] = alicetotals[2][1] + 1; alicehit = true ...​
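The dispute in the following posts centers on what this independent-draw model predicts when Alice and Bob use the same angle. A minimal Monte Carlo sketch of just that case (my own reimplementation of the "statistical" method above; the function name and trial count are mine, not code from the animation):

```javascript
// Sketch: same-angle trials of the independent-draw model above.
// Each trial shares one hidden polarization angle, but Alice and Bob
// compare the SAME hit probability against SEPARATE random numbers.
function sameAngleMatchRate(trials) {
  let matches = 0;
  for (let i = 0; i < trials; i++) {
    const photonangle = Math.random() * 2 * Math.PI; // hidden polarization (radians)
    const detectorangle = 0;                         // both polarizers at 0 degrees
    const probofhit = (Math.cos((photonangle - detectorangle) * 2) + 1) / 2;
    const alicehit = Math.random() < probofhit;
    const bobhit = Math.random() < probofhit;        // independent draw
    if (alicehit === bobhit) matches++;
  }
  return matches / trials;
}

// Averaged over the hidden angle, P(match) = E[p^2 + (1-p)^2] = 3/4,
// not the 1.0 that QM predicts for identical settings.
console.log(sameAngleMatchRate(200000)); // typically close to 0.75
```

If a run of the animation shows 100% matching at identical angles, something other than this independent-draw step must be producing it, which is the point pressed in the next post.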
 
  • #44
edguy99 said:
That said, I can itemize some key aspects of the calculations within this program:

To set the photons polarization

var ri = Math.random();
photonangle = Math.round(ri*360);
To set Bob and Alice polarizer angles

aliceangle = "0"
var ri = Math.random()
if (ri > .6666666) { aliceangle = "120" } else if (ri > .3333333) { aliceangle = "240" }
bobangle = "0"
ri = Math.random()
if (ri > .6666666) { bobangle = "120" } else if (ri > .3333333) { bobangle = "240" }
To check if Bob and Alice get detection events

Statistical/Quantum method

var probofhit = (Math.cos((rphotonangle - rbobangle)*2)+1)/2;
var ri = Math.random();
if (ri<probofhit) { bobhit = true };

var probofhit = (Math.cos((rphotonangle - raliceangle)*2)+1)/2;
var ri = Math.random();
if (ri<probofhit) { alicehit = true };

Is this your code or not? Regardless:

The local realistic simulation is fine for the hypothetical polarization and the selection of Alice and Bob's angles.

The bolded section is where the problem lies. For example: Cos(rphotonangle - rbobangle) * 2 as you wrote is not what is in the code. The code shows Cos(rphotonangle - rbobangle) ^ 2 instead. That yields a range of 0 to 1. When you add 1 and divide by 2, you get a range from .5 to 1. You then compare to a random number between 0 and 1. Oops. That yields HITS about 75% of the time. Should be 50%. And yet that is not what is being reported. The actual hits (Reds vs Blues) are fairly close to 50%. Something is wrong here.

And then there is the issue when there is a photon angle like 45 degrees and Alice and Bob are set at 0 degrees. The formula evaluates the same for both as usual, .75. That is compared to a random number for Alice and Bob. Statistically that should produce the same outcome 62.5% of the time. But then would be different 37.5% of the time. So there should be a reasonable number of mismatches in the E matrix. I have not seen a single one. Something is wrong.
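The 62.5% figure is just the probability that two independent draws against the same p agree; a quick arithmetic check (a sketch, not code from the animation):

```javascript
// With hit probability p for both Alice and Bob, drawn independently,
// P(same outcome) = P(both hit) + P(both miss) = p*p + (1-p)*(1-p).
const p = 0.75; // the value computed above for a 45-degree offset
const pSame = p * p + (1 - p) * (1 - p);
console.log(pSame); // 0.625, i.e. 62.5% same, 37.5% different
```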

Something is rotten. I am not going to debug this logic, but you are making a claim that cannot be substantiated. Your model is perhaps the most basic local realistic model, and does not work.
 
  • #45
edguy99 said:
Thank you for confirmation that the calculations used in the statistical/quantum section of the animation at http://www.animatedphysics.com/games/photon_longdistance_nonlocality.htm produce QM statistics.
The CHSH and Bell inequalities are identities that cannot be violated algorithmically the way you are trying. You might as well try to show that 1+1=3. That would clearly indicate an error somewhere. I conjecture that to violate an identity you need to build some cheating into your code, like deliberately switching a result if a certain condition occurs. No-one has ever found the cheat, and it is possible that one does not exist.
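The "identity" character of CHSH can be illustrated by brute force: for any deterministic local assignment of ±1 outcomes, the CHSH combination is bounded by 2. A short sketch (my own enumeration, not from the thread):

```javascript
// Enumerate every deterministic local strategy: Alice's outcomes a1, a2 for
// her two settings, Bob's b1, b2 for his, each +/-1. Compute the CHSH
// combination S = a1*b1 + a1*b2 + a2*b1 - a2*b2 and record the largest |S|.
let maxAbsS = 0;
for (const a1 of [-1, 1])
  for (const a2 of [-1, 1])
    for (const b1 of [-1, 1])
      for (const b2 of [-1, 1]) {
        const S = a1 * b1 + a1 * b2 + a2 * b1 - a2 * b2;
        maxAbsS = Math.max(maxAbsS, Math.abs(S));
      }
console.log(maxAbsS); // 2
```

Since any local hidden-variable model is a probabilistic mixture of these 16 strategies, its |S| can never exceed 2, while quantum mechanics reaches 2√2; that is why no honest local algorithm can cross the bound.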

There is a very interesting analysis of EPR and Bell in this book by Asher Peres (and it's free).
 
  • #46
zonde said:
The idea that Bell inequality violations require "influences" to "move or propagate" across that distance is not an assumption. It is a conclusion. You have to point out where you see a possibility of reasoning differently.
Such a possibility is what I have said already. To get the Bell inequality, the key assumption is that the system must be in a "complete state" that allows its parts to produce probability distributions for any measurements put on them, independently of any other measurement on any other part of the system. Individual measurements are always consistent with this expectation, but not the correlations between them in entangled systems. Thus, the claim about the probability distributions is wrong, but it can be relaxed in several ways and still accommodate the results, even though none of these accommodations are themselves testable-- they are a matter of personal taste. One is to say that when a measurement is done on one part of the system, some kind of "retrocausal influence" propagates to other measurements and alters their probability distributions. For obvious reasons, this is an awkward language for attributing the observed behavior!

Another approach is to simply say that the measurements done on the system are part of what determines the probability distributions of those measurements. This is, after all, just what we see with entangled systems; it was only ever us who said these systems had to be composed of independent probability distributions. We know they do not, so why force "influences" down their throat, when we can just say what we see to be true: the probabilities depend on the measurements we choose, because part of what we mean by the system is how the outcomes of chosen measurements interact with the preparation. That's really all physics ever concerns itself with; the idea that you need influences to alter the independent probabilities is merely a holdover from other contexts where that actually works.
You can start here:
If a theory predicts perfect correlations for the outcomes of distant experiments, then either the theory must treat these outcomes as deterministically produced from the common past of these experiments, or the theory must violate locality.
Do you see any way to do this reasoning differently?
First of all, entanglement is never about perfect correlations, because you don't get Bell violations with perfect correlations. But a more important issue is, I never said the theory doesn't violate locality, of course it does that. I said it doesn't need to violate it by application of a concept of "FTL or retrocausal influences." One simply says that to talk meaningfully about a system, we need more than its preparation, we need to know what attributes of that system are being established by measurement, and the probabilities it will exhibit depend on the way the measurements establish those attributes. Again, outcomes are an interaction between preparation and measurement choices, no influences needed, FTL or otherwise. I'd say that same thing even if the speed of light was infinite-- there's no magical "influence" there either way, its causality violations are just a good clue we made a bad choice of language.
 
  • #47
DrChinese said:
Is this your code or not? Regardless:

The local realistic simulation is fine for the hypothetical polarization and the selection of Alice and Bob's angles.

The bolded section is where the problem lies. For example: Cos(rphotonangle - rbobangle) * 2 as you wrote is not what is in the code. The code shows Cos(rphotonangle - rbobangle) ^ 2 instead. That yields a range of 0 to 1. When you add 1 and divide by 2, you get a range from .5 to 1. You then compare to a random number between 0 and 1. Oops. That yields HITS about 75% of the time. Should be 50%. And yet that is not what is being reported. The actual hits (Reds vs Blues) are fairly close to 50%. Something is wrong here.

And then there is the issue when there is a photon angle like 45 degrees and Alice and Bob are set at 0 degrees. The formula evaluates the same for both as usual, .75. That is compared to a random number for Alice and Bob. Statistically that should produce the same outcome 62.5% of the time. But then would be different 37.5% of the time. So there should be a reasonable number of mismatches in the E matrix. I have not seen a single one. Something is wrong.

Something is rotten. I am not going to debug this logic, but you are making a claim that cannot be substantiated. Your model is perhaps the most basic local realistic model, and does not work.

Not sure where you got the square, but the code in the program and as above has no square in it. var probofhit = (Math.cos((rphotonangle - raliceangle)*2)+1)/2;

An example may help clarify. 0° difference implies a 1.0 probability, 45° difference implies a 0.5 probability, 90° difference implies a 0.0 probability.
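Those three examples can be checked directly against the formula in the code (a sketch; the degToRad helper and the renamed probOfHit are mine):

```javascript
// probofhit as in the animation code: (cos(2 * difference) + 1) / 2,
// which equals cos^2(difference).
const degToRad = (d) => d * Math.PI / 180;
const probOfHit = (diffDeg) => (Math.cos(degToRad(diffDeg) * 2) + 1) / 2;

console.log(probOfHit(0));  // 1
console.log(probOfHit(45)); // 0.5
console.log(probOfHit(90)); // 0 (possibly a tiny floating-point residue)
```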

WRT the mismatches in the E matrix, the first 3 animations use the classical Bell style that has no mismatches. The last (4th) animation uses the formula above and models the quantum state. You will see mismatches in the 4th animation.

I am considering putting a button on each of the animations allowing you to choose whether you want classical or quantum; maybe that would help clarify the quantum vs classical distinction.

Edit: after posting this, I see the * (star, meaning multiply) looks a little like a ^ (meaning to the power of). Just to be clear, the star (*) in javascript means multiply.
 
  • #48
edguy99 said:
Not sure where you got the square, but the code in the program and as above has no square in it. var probofhit = (Math.cos((rphotonangle - raliceangle)*2)+1)/2;

An example may help clarify. 0° difference implies a 1.0 probability, 45° difference implies a 0.5 probability, 90° difference implies a 0.0 probability.

WRT the mismatches in the E matrix, the first 3 animations use the classical Bell style that has no mismatches. The last (4th) animation uses the formula above and models the quantum state. You will see mismatches in the 4th animation.

I am considering putting a button on each of the animations allowing you to choose whether you want classical or quantum; maybe that would help clarify the quantum vs classical distinction.

We are not going to discuss your computer program here, which is flawed to the extent you think it is representative of local realism. As simply as I can say it: publish it in a peer-reviewed journal. We have been through hundreds of attempts to use this forum to push local realistic concepts, and they all get shut down. Either close this line off yourself, or your next reference to it will be reported to the moderators.
 
  • #49
DrChinese said:
We are not going to discuss your computer program here, which is flawed to the extent you think it is representative of local realism. As simply as I can say it: publish it in a peer-reviewed journal. We have been through hundreds of attempts to use this forum to push local realistic concepts, and they all get shut down. Either close this line off yourself, or your next reference to it will be reported to the moderators.

Ouch!
 
  • #50
edguy99 said:
Ouch!

Edguy, it's a very nice program. I wish I had written it myself (I am a software developer). And it does have merits from an instructive point of view just as a visual aid - which is how I think it is intended.

But trying to push something that claims to match QM and is "local realistic" is going to be a violation of PhysicsForums guidelines regarding speculation and unpublished works (ie what is an acceptable source).

If you want to understand why your model does not match QM (and actual experiments as well), then PM me your email address and I will explain offline. Just because the simulation works does not mean it passes all the tests.
 
  • #51
Ken G said:
Another approach is to simply say that the measurements done on the system are part of what determines the probability distributions of those measurements. This is, after all, just what we see with entangled systems; it was only ever us who said these systems had to be composed of independent probability distributions. We know they do not, so why force "influences" down their throat, when we can just say what we see to be true: the probabilities depend on the measurements we choose, because part of what we mean by the system is how the outcomes of chosen measurements interact with the preparation. That's really all physics ever concerns itself with; the idea that you need influences to alter the independent probabilities is merely a holdover from other contexts where that actually works.

First of all, entanglement is never about perfect correlations, because you don't get Bell violations with perfect correlations. But a more important issue is, I never said the theory doesn't violate locality, of course it does that. I said it doesn't need to violate it by application of a concept of "FTL or retrocausal influences." One simply says that to talk meaningfully about a system, we need more than its preparation, we need to know what attributes of that system are being established by measurement, and the probabilities it will exhibit depend on the way the measurements establish those attributes. Again, outcomes are an interaction between preparation and measurement choices, no influences needed, FTL or otherwise. I'd say that same thing even if the speed of light was infinite-- there's no magical "influence" there either way, its causality violations are just a good clue we made a bad choice of language.
Well yes, there is another explanation that works. It's called superdeterminism. Unfortunately it's non-scientific. If that is what you mean (as it seems to me) then it's no real alternative to FTL.

Btw, as you refer to FTL and retrocausal influences together, do you consider them equally unacceptable?
 
  • #52
DrChinese said:
... claims to match QM and is "local realistic" is going to be a violation of PhysicsForums guidelines regarding speculation and unpublished works (ie what is an acceptable source)...

I'm sorry, I will try to be more careful. I actually had not even checked if it did match QM till you mentioned it. I was pleasantly surprised to hear you say it did.

I have been concentrating on the process of constructing the matrices A, B, E, and F. The only reason to run the animation is to generate the matrices, but it is nice to highlight the time-dependent things that are happening, ie 1/ random photon shot, 2/ Bob/Alice detectors randomly set, 3/ detection recorded in the matrices. Repeat until done.

I found the E matrix interesting as I had not realized the importance of the two 0's in the E matrix in the Bell test until I did the animation.

Also, I have converted the animation to do only "statistical" photons (ie. previously stages 1, 2, 3 were classical and stage 4 was statistical; now there are only 3 stages, all statistical). Now you can see some numbers showing up in matrix E when doing the single-shot animation.

BTW, it would be great if you were interested in trying the functions in post #44 and generating a few sets of matrices to have a look at :)
 
  • #53
zonde said:
Well yes, there is another explanation that works. It's called superdeterminism.
No, what I just described is not superdeterminism. What I just described is pure science. Let me repeat it to prove that.

In scientific thinking, any "system" can be regarded as a kind of machine for connecting a preparation with a set of outcomes for any set of measurements, where what I mean by "outcomes" can include probability distributions and not just definite predictions. That's it, that's what I said. There is no scientific requirement whatsoever that the "system" be able to be analyzed in terms of parts, it just happens that this is often possible. There is no scientific requirement whatsoever that the probability distributions be independent of each other, in the sense that the preparation must determine what those distributions are, independent of the actual set of measurements that are used-- it just so happens that this is often possible. And finally, there is no scientific requirement whatsoever for us to say that whenever we find correlations between measured outcomes that depend on what measurements are done, there has to be "influences" that propagate across the system, though again, there are situations where that is a useful way to think of what is happening (generally situations where the "influences" are slower than light and could in principle be intercepted, i.e., when the influences are themselves part of the measurable system). In all cases, what is happening here is simply that models that have worked for us in various contexts, models like "parts" and "independence" and "influences", are simply being mistaken for scientific requirements. But science is not defined by the model, it is defined by the method.
Btw, as you refer to FTL and retrocausal influences together, do you consider them equally unacceptable?
Yes, I see no particularly important distinction between crossing one light-cone boundary versus crossing two of them. Either way, it's not the way the time-honored model we call an "influence" works. My main objection to that language is it demonstrably forces people to conclude that a behavior they are sure will happen is "weird." That's the signpost of a failing picture. What is weird is supposed to be what we do not expect, rather than what we do expect.
 
  • #54
Ken G said:
Let me repeat it to prove that.

I really can't understand you, Ken G. Could you address my question directly?

I don't even know if I understood your premises here but this is what it seems to me that you're saying: the resulting probability distribution depends on the holistic system (which includes the measurement instruments). I've made a picture:

[attached diagram]


It doesn't matter if the polarizers are part of the holistic system: sure, rotating one changes the outcome probability distribution, but also at the other side. A choice here impacts there, because the system is extended in space. You can say the whole system changes nonlocally, so as to avoid using the term FTL. But it's only a nominal difference. It may be verbally nicer.

Am I missing something?
 
  • #55
Ken G said:
There is no scientific requirement whatsoever that the "system" be able to be analyzed in terms of parts, it just happens that this is often possible.
If we can make observations in terms of parts, then of course we can analyze these observations.

Ken G said:
There is no scientific requirement whatsoever that the probability distributions be independent of each other, in the sense that the preparation must determine what those distributions are, independent of the actual set of measurements that are used-- it just so happens that this is often possible.
Actually, the no-superdeterminism assumption requires that the preparation is independent of reasonably randomized measurement settings.

Ken G said:
And finally, there is no scientific requirement whatsoever for us to say that whenever we find correlations between measured outcomes that depend on what measurements are done, there has to be "influences" that propagate across the system, though again, there are situations where that is a useful way to think of what is happening (generally situations where the "influences" are slower than light and could in principle be intercepted, i.e., when the influences are themselves part of the measurable system).
There is such a requirement. That requirement is called falsifiability. It means there should be a valid observation that could indicate an FTL "influence" and falsify the "no FTL" hypothesis, even if we can't create that FTL phenomenon artificially.

Ken G said:
In all cases, what is happening here is simply that models that have worked for us in various contexts, models like "parts" and "independence" and "influences", are simply being mistaken for scientific requirements. But science is not defined by the model, it is defined by the method.
Science is also defined by the requirements that scientific models have to fulfill.

Ken G said:
Yes, I see no particularly important distinction between crossing one light-cone boundary versus crossing two of them. Either way, it's not the way the time-honored model we call an "influence" works. My main objection to that language is it demonstrably forces people to conclude that a behavior they are sure will happen is "weird." That's the signpost of a failing picture. What is weird is supposed to be what we do not expect, rather than what we do expect.
Are your objections about wording or what?

There are two theories, and what we expect from one theory contradicts what we consider possible in the other theory. If you say this is a signpost of a failing picture ... well, I agree.
 
  • #56
ddd123 said:
You can say the whole system changes nonlocally, so as to avoid using the term FTL. But it's only a nominal difference. It may be verbally nicer.

Am I missing something?
Yes. The concept of "FTL" involves the concept "faster." That involves the concept of speed, which involves the concept of movement or propagation. Also, when people use the term FTL, as in this thread, they generally couple it to the term "influence." So this is what I am saying has no need to be present in entanglement: the concept of a propagating influence. That's why I said it's not the "FTL" itself I object to, if all you mean by that is once you've embraced realism, you are stuck with nonlocality-- that is what we know to be true. But we don't have to leave it at that, if we still have something that seems "weird"-- we have something else too. We have the repudiation of the idea that a system is made of parts that connect via "propagating influences". That latter notion is simply a view that often works for us, so we mistake it for some kind of universal truth, or worse, a universal requirement for scientific investigation.

So when you say the system "changes nonlocally" when a measurement choice is made, even that implies a hidden assumption. You see that the measurement is applied at one place and time, and the system changes everywhere. But that isn't supported by the data either. The most general meaning of "a system" is a mapping between a preparation and a set of measurement probability distributions-- after all, that's all that is involved in empirical science. Notice that if you define the "system" that way, it never "changes" nonlocally! To get rid of "nonlocality", one merely needs to define a system the way I just did, and poof, it's gone. The nonlocality comes from requiring that the system must exist independently of the measurements that establish its attributes; that's why it is actually "local realism," i.e., the combination of locality and realism, that is ruled out-- not either one individually. Personally, I find it much more scientific to define a system the way I just did, but this does not get called "realism" because the measurements are included in the meaning of the system.
 
  • #57
I don't understand the definition. One thing is not slicing the system into parts, in the sense that they're not taken to be independent; another is ignoring spatiotemporal extension and events altogether. Are you giving up the notion of spacetime events? What do you substitute for it?
 
  • #58
zonde said:
If we can make observations in terms of parts then of course we can analyze these observations.
What do you mean by "observations in terms of parts"? What we have are observations at various places and times, that's it-- that's not parts. The "parts" are your personal philosophy here, nothing more. Now, it's certainly not a rare philosophy, and the "parts" model certainly works very well for us in almost all situations. But not all, and that's the point. Hence, we should question that personal philosophy about observations of systems. It is the observations that have a place and time, not the "parts of the system." That's why I brought up white dwarf stars: they make this point really clear, because there you have something the size of a planet that cannot be analyzed in terms of "observations on parts," since the "parts" are indistinguishable from each other.

Actually no, the no-superdeterminism assumption requires that the preparation is independent of reasonably randomized measurement settings.
I haven't said anything about superdeterminism other than that I am not talking about superdeterminism, nor have you shown that what I am saying is tantamount to superdeterminism.
There is such requirement. That requirement is called falsifiability.
Then you just violated falsifiability with your first sentence above. What is falsifiable about the claim that observations at a place and time are "observations in terms of parts," if entanglement does not falsify that claim, and indistinguishability in white dwarfs does not falsify it either? Anyone who thinks observations are "observations in terms of parts" (which, I admit, is pretty much everyone before they learn quantum mechanics) is going to find both entanglement and fermionic indistinguishability "weird." But weirdness is not falsification; it is more like repudiation.

That means there should be a valid observation that could indicate an FTL "influence" and falsify the "no FTL" hypothesis, even if we can't create that FTL phenomenon artificially.
Are you saying there is such a valid observation? If not, it means we are deciding between personal philosophies, and I've never claimed otherwise.
There are two theories, and what we expect from one contradicts what we consider possible in the other. If you say this is a signpost of a failing picture ... well, I agree.
I don't see what two contradicting theories you have in mind here. If anyone had been talking about slower-than-light influences that can be intercepted and studied as part of the system, then we'd be talking about a theory that makes different predictions than quantum mechanics, but no one is talking about a theory like that.
 
  • #59
To me the white dwarf example is absolutely the same, so I kept using the EPR one. You remove an atom from one side and, suddenly, placing an atom on the other side becomes possible. It's exactly the same as in my picture above, with an atom placer-remover instead of the polarizer.
 
  • #60
ddd123 said:
I don't understand the definition. One thing is not slicing the system into parts in the sense that they're not taken to be independent; another is ignoring spatiotemporal extension and events. Are you giving up the notion of spacetime events?
I have in no way given up the notion of spacetime events; I have merely noticed what is in evidence: the spacetime events are the observations, not the parts of the system. The parts of the system need to mean nothing more than what the system is composed of, which is all part of the preparation. If you do an observation at a given place and time, it is certainly important that the observation occurred at a given place and time, and you will never need to use any other concept of "parts." Once the measurement is made, you can regard that as a new preparation, a new system, and a new mapping from that total preparation to new outcomes. Those new probabilities can change instantly because your information has changed instantly; that's "Bertlmann's socks." Nonlocality only appears when you combine with realism, i.e., combine with the picture that there exists a unique probability distribution that goes with each measurement independently of all other measurements outside the past light cone of that one. Local realism says you might not know what that unique distribution is, and any new information you get that constrains the preparation could cause you to reassess that unique probability, but you hold that it still exists unchanged. But if you don't think of a system as a list of unique probabilities, but rather as a machine for connecting a preparation with a set of probabilities associated with any set of measurements you could in principle do, then you have no problems with nonlocality.
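The "Bertlmann's socks" point is ordinary conditional probability, and can be checked in a few lines (an editor's sketch; the pink/green mismatched pair is just the standard illustration, not from the thread):

```python
from fractions import Fraction

# A mismatched pair of socks: equally likely (A=pink, B=green) or the reverse,
# boxed at random and sent to two distant labs.
joint = {('pink', 'green'): Fraction(1, 2),
         ('green', 'pink'): Fraction(1, 2)}

def p_b_pink(given_a=None):
    """Probability that B's sock is pink, optionally conditioned on A's color."""
    total = sum(p for (a, b), p in joint.items() if given_a in (None, a))
    pink = sum(p for (a, b), p in joint.items()
               if b == 'pink' and given_a in (None, a))
    return pink / total

print(p_b_pink())         # 1/2 before anyone looks
print(p_b_pink('green'))  # 1 the instant A sees green: information, not influence
```

The distant probability "changes instantly" only because the predictor's information changed; nothing traveled to B's lab.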
 
  • #61
ddd123 said:
To me the white dwarf example is absolutely the same, so I kept using the EPR one. You remove an atom from one side and, suddenly, placing an atom on the other side becomes possible. It's exactly the same as in my picture above, with an atom placer-remover instead of the polarizer.
Yes, I agree we have the same issue in that example; I just think it is an example where it is even more clear that the "propagating influence" philosophy is not a good one. The reason I say it is "not good" is that it leads us to imagine what we know will happen to be "weird." That's what I mean by a "not good" philosophy. There are no individual electrons in a white dwarf, so they don't have locations; they are indistinguishable from each other. What have locations are the measurements that we do on a sea of electrons.
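The indistinguishability being invoked here is just the antisymmetry of the two-fermion wavefunction (a textbook aside, not from the thread): for single-particle states $\phi_a$, $\phi_b$,

```latex
\psi(x_1, x_2) \;=\; \frac{1}{\sqrt{2}}\Big[\phi_a(x_1)\,\phi_b(x_2) \;-\; \phi_b(x_1)\,\phi_a(x_2)\Big],
\qquad \psi(x_2, x_1) = -\,\psi(x_1, x_2).
```

Setting $\phi_a = \phi_b$ gives $\psi = 0$ (Pauli exclusion), and there is no fact of the matter about "which" electron occupies which orbital; only the joint state, and measurements on it, have physical standing.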
 
  • #62
Even if you consider the set of measurements as the mapping's input, the elements of that set correspond to spacetime events: so the probability distribution in the output, as observed in one spacetime region, depends instantaneously on the input in another region.

Ken G said:
What have locations are the measurements that we do on electrons.

Right, that's why it doesn't change anything for me. Even without realism you still have nonlocality without relaxing other assumptions (like forward causality).
 
  • #63
ddd123 said:
Even if you consider the set of measurements as the mapping's input, the elements of that set correspond to spacetime events: so the probability distribution in the output, as observed in one spacetime region, depends instantaneously on the input in another region.
There's nothing strange about that; it happens with a pair of socks. But what is significant is that, for the socks, the change in the probability distribution only appears to be nonlocal if you think the probability distribution exists at the location of the measurement. If you recognize that the location of a probability distribution is in the brain of the person using it, again there is never any nonlocality. My point is that it is very easy to make nonlocality go away: all you need to reject is local realism, the combination that allows us to imagine a set of independent parts that have probability distributions carried around with them, subject to subluminally propagating influences. That picture doesn't work well at all for either white dwarfs or EPR experiments, because the "propagating influences" would have to do things like propagate back in time. So I say, jettison that picture, and generalize the meaning of a system to be nothing more than what we can observe it to be: a mapping from a preparation to a set of outcomes associated with each hypothetical set of measurements we could do. Nothing more, and nothing less. If you just say that, there's never any nonlocality. Personally, I don't think it's "weird" to stick with what we actually need, especially when we get no nonlocality when we do that.
Right, that's why it doesn't change anything for me. Even without realism you still have nonlocality without relaxing other assumptions (like forward causality).
No, you never get nonlocality without realism. That's why Bell's theorem is said to rule out local realism; it is not said to rule out locality.
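For completeness, the local-realist bound and its quantum violation can be checked numerically (an editor's sketch using the standard CHSH angles for photon pairs; not from the thread):

```python
import math

def E(a, b):
    """Quantum correlation of the +/-1 outcomes for photons from the
    |HH> + |VV> state, measured at polarizer angles a and b."""
    return math.cos(2 * (a - b))

a1, a2 = 0.0, math.pi / 4              # Alice's two settings
b1, b2 = math.pi / 8, 3 * math.pi / 8  # Bob's two settings

# Any local-realist model obeys |S| <= 2 (the CHSH inequality);
# the quantum prediction reaches 2*sqrt(2).
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(S)  # ~2.828, exceeding the local-realist bound of 2
```

This is the precise sense in which the combination of locality and realism, rather than locality alone, is what the experiments rule out.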
 
Last edited:
  • #64
Ah, I think I'm understanding you more now. But for me it's not that the probability distribution exists at the spacetime location. What exists in the spacetime region is the set of measurement results. The probability distribution is asymptotically reconstructed from a large number of events. The hypothetical concerning measurement choices here involves a different observed result there. The probability distribution is a conclusion; the physical events are laid out, and the hypotheticals involved display an instantaneous "if here then there."

BTW I wasn't talking about Bell's theorem.
 
  • #65
Ken G said:
That's why the Bell theorem is said to rule out nonlocal realism
You mean it rules out local realism? It proves that no local hidden variables can possibly reproduce the observed outcomes.
 
  • #66
ddd123 said:
Ah, I think I'm understanding you more now. But for me it's not that the probability distribution exists at the spacetime location. What exists in the spacetime region is the set of measurement results.
That's fine, you can have the measurement results exist at their location, and still keep locality. In fact, that's how you keep locality-- measurements have the location of the measurement, predictive probabilities have the location of the brain using them. Stick to that, and add nothing more that you don't need, and you get no nonlocality in either white dwarfs or EPR experiments.
The hypothetical concerning measurement choices here involves a different observed result there. The probability distribution is a conclusion; the physical events are laid out, and the hypotheticals involved display an instantaneous "if here then there."
There are two types of probability distributions. One is an expectation in the mind of the physicist, which is to be checked; the other is a distribution of outcomes, which is not actually a probability distribution, but it is what the predicted distribution is checked against, so in the sense of comparing apples to apples we can call it a probability distribution as well. The point is, if you never go beyond asserting that the former has the location of a brain and the time the brain is using it, and the latter has the location of a measuring apparatus and the time of the measurement, then you never need any nonlocality. The need for nonlocality doesn't come from that; it comes from the claim that there exists a unique probability distribution for any hypothetical but possible measurement at any place and time, and that this unique distribution is independent of everything going on outside its past light cone. That's all you need to jettison to restore the proper locality of everything-- get rid of the "propagating influences" requirement.
BTW I wasn't talking about Bell's theorem.
We can just as easily talk about white dwarfs, everything I said above applies there also.
 
  • #67
But, while the hypothesized probability distribution exists only abstractly, the measurement choices and results are real and localized. And the observed correlation frequencies change instantaneously on one side when the measurement choices change on the other side.
 
  • #68
ddd123 said:
But, while the hypothesized probability distribution exists only abstractly, the measurement choices and results are real and localized. And the observed correlation frequencies change instantaneously on one side when the measurement choices change on the other side.
No-- when a system is viewed as a mapping between a preparation and any set of possible measurements, that automatically includes all their correlations. They are all set by the preparation, so the correlations never "change". What is a change in a correlation?
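The claim that the correlations are "set by the preparation" sits comfortably with the no-signaling property, which is easy to verify in the same toy photon-pair model (an editor's sketch, not from the thread): each side's marginal statistics are independent of the other side's setting; only the joint distribution depends on both.

```python
import math

def joint(a, b):
    """Joint outcome probabilities for the entangled photon pair
    at polarizer angles a and b (toy |HH> + |VV> model)."""
    s = 0.5 * math.cos(a - b) ** 2
    d = 0.5 * math.sin(a - b) ** 2
    return {('+', '+'): s, ('-', '-'): s, ('+', '-'): d, ('-', '+'): d}

def alice_marginal(a, b):
    """Probability Alice sees '+', computed under Bob's setting b."""
    p = joint(a, b)
    return p[('+', '+')] + p[('+', '-')]

# Alice's statistics never depend on what Bob chooses:
for b in (0.0, 0.3, 1.1):
    print(alice_marginal(0.5, b))  # ~0.5 each time; nothing Bob does is visible here
```

So nothing observable on one side "changes" when the other side's setting changes; only the correlations, fixed by the preparation, differ across joint settings.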
 
  • #69
Is there an article or book where this is looked at in detail? Or does the position you're explaining have a name?
 
  • #70
Ken G said:
What is a change in a correlation?
Like the delayed choice quantum eraser?
 
