Long-distance correlation, information, and faster-than-light interaction

The discussion revolves around the nature of long-distance correlations in quantum mechanics, particularly in the context of Bell's theorem and faster-than-light (FTL) communication. Participants explore the implications of photon polarization and the randomness inherent in quantum measurements, debating whether FTL influences are necessary to explain observed correlations. The conversation emphasizes that while correlations exist, attributing them to an influence between distant measurements is problematic and may misrepresent the underlying physics. There is a call for further investigation into the randomness of photon behavior without resorting to FTL theories, suggesting that understanding these correlations requires a reevaluation of how information is conceptualized in quantum experiments. The thread ultimately highlights the complexities of interpreting quantum entanglement and the limitations of classical models in explaining quantum phenomena.
  • #31
Ken G said:
You see, in my view, a question never asked to a system, by an actual apparatus capable of answering that question, is a question that is not answered by that system. So I don't think of a system as a kind of "answer man", just waiting to be asked any question (that's standard realism, local or otherwise), I view the answers established by the apparatus as part of the system.
From your point of view, what is the significance of the fact that in classical mechanics, we can view the system as the "answer man"?
 
  • #32
Ken G said:
In short, the reason the system violates Bell's inequality is that if different questions are put to the system, it's a different system, because the system is part of the full reality there. So all you do is relax not only the local realism we often try to attach to the parts of the system, but also the local realism we attach to the system as independent from the environment that establishes the facts of the system. If you do this, you never need anything FTL, because Bell violations are no particular problem.
Relaxing local "realism" (local predeterminism) allows violation of Bell's inequalities only as much as it allows non-locality (FTL).
So what you are saying does not really make any sense.
 
  • #33
zonde said:
Relaxing local "realism" (local predeterminism) allows violation of Bell's inequalities only as much as it allows non-locality (FTL).
I took it to mean: since even the measuring instruments are part of this whole, the nonlocal changes are of the whole too; so in this sense there isn't some "moving" (which the notion of speed implies). As I said above, this to me seems like a rephrasing and it's not substantially different from just saying it's FTL. But at least it made sense to me with respect to the problem of action.
 
  • #34
edguy99 said:
There is no FTL mechanism used in this animation; it is structured in a step-by-step method so its path through time can be tracked. Give it a try, as it can generate larger photon runs now. As to why post it here: the original poster of this thread laid out some parameters to generate the matrices used in analyzing photon polarization, and the animation creates these matrices and saves a lot of calculation time. Try it out if you get a chance.

The animation uses two different methods of calculating what happens when a photon interacts with a polarizer. Both the classical and the statistical/quantum method use only the angle of their own polarizer and the photon that hits it to determine the outcome. The calculation does not rely on the other photon's orientation or the setting of the other polarizer, hence no FTL communication for either animation method.

The "statistical/quantum" imagines spin with precession. ...

For the "statistical/quantum animation", whether or not this object gets through a polarizer depends on these equations, again, with no reference to the other photon orientation or the other polarizer angle.
  • Chance of vertical measurement = (cos((γ − λ)*2)+1)/2
  • Chance of horizontal measurement = (cos((γ − λ + π/2)*2)+1)/2
... Again, the important feature of the animation is that no FTL calculation or principle is used.

I am going to absolutely challenge everything you are saying about recreating the quantum statistics in a computer program using independent calculations. That is impossible.

I checked the stats in the referenced simulation for the quantum case by performing a run of 100. There were 28 cases where Alice and Bob's angles were selected the same, and all 28 (100%) yielded matches as expected. There were 72 cases where the selected angles were different, and 22 of those (30.55%) were matches. The local realistic max is 25% and the QM prediction is 33.33%. You can run this yourself, and I am sure that as long as you do, you will see the following: 100% matches when the angle is the same, and something close to 33.33% matched when the angles for Alice and Bob are 120 degrees apart.

But guess what? When the angles are the same, those are NOT the statistics you get from evaluating the formulae you supplied. Surely you can see that when Alice and Bob measure at the same angle, there is an identical difference of γ − λ degrees for both Alice and Bob. That evaluates to something different from 1 or 0 - the values that would be required for certain matching. So there would be some cases where a measurement at the same angle would yield a mismatch instead of a match. That doesn't ever happen in the simulation, not even once, for the quantum statistics.

I ran another 100 and the results were 33 at the same angles, which were 100% matched; and 67 at a difference of 120 degrees, which were 37.31% matched. This model produces the QM statistics and has both Alice and Bob's settings as part of the algorithm; there is NOT independent calculation for Alice and Bob's outcomes. By the rules of simulations, there is non-local influence.

Please, you are wildly off base to offer a model that is so obviously refuted by Bell. This is far outside generally accepted physics, and is personal speculation that should not be presented here. You couldn't even hand pick values and make it work out. That is the "DrChinese" challenge, in fact.
 
Last edited:
  • #35
ddd123 said:
I took it to mean: since even the measuring instruments are part of this whole, the nonlocal changes are of the whole too; so in this sense there isn't some "moving" (which the notion of speed implies). As I said above, this to me seems like a rephrasing and it's not substantially different from just saying it's FTL. But at least it made sense to me with respect to the problem of action.
I agree that we have to consider measuring instruments too. But if we talk about violation of Bell inequalities and non-locality it does not change anything.
If distant measurement results are determined locally then they can't violate Bell inequality.

Not sure what you mean by "nonlocal changes" here, or what is moving. Maybe you mean measurement settings, as in rotation of polarizers?
 
  • #36
zonde said:
I agree that we have to consider measuring instruments too. But if we talk about violation of Bell inequalities and non-locality it does not change anything.
If distant measurement results are determined locally then they can't violate Bell inequality.

Not sure what you mean by "nonlocal changes" here, or what is moving. Maybe you mean measurement settings, as in rotation of polarizers?

The "faster" in FTL implies speed. Speed implies movement. So here it means: the effect I produce by rotating the polarizer jumps to the other side. But if the whole setting, including the measurement instruments, is a whole nonlocal entity, the global changes aren't being transmitted; they're just nonlocal.

I'm just going for a charitable interpretation of the claim "no FTL". Since I asked: if a choice here implies one result there and not another, how is there no action? In the end I got the above answer. And that's how I made sense of it. I don't know if it's consistent.
 
  • #37
zonde said:
Possible reasons to drop some approach is inconsistency or contradiction with observations.
But possible reasons to drop a philosophy include that it is leading you to regard what you know will happen as "weird." That's a failing philosophy.
Treating entangled particles as parts gives consistent picture.
So does treating them not as parts. The only difference is, nothing FTL.
 
  • #38
ddd123 said:
The "faster" in FTL implies speed. Speed implies movement. So here it means: the effect I produce by rotating the polarizer jumps to the other side. But if the whole setting, including the measurement instruments, is a whole nonlocal entity, the global changes aren't being transmitted; they're just nonlocal.
Yes, FTL is meaningless if distance is an illusion. But such a "distance is illusion" type of nonlocality is a very radical idea, comparable to solipsism. Distance is a very basic concept in our perception of the physical world.
 
  • #39
ddd123 said:
I took it to mean: since even the measuring instruments are part of this whole, the nonlocal changes are of the whole too; so in this sense there isn't some "moving" (which the notion of speed implies).
Exactly.
As I said above, this to me seems like a rephrasing and it's not substantially different from just saying it's FTL. But at least it made sense to me with respect to the problem of action.
The reason it is substantially different from FTL isn't the FTL part, it is the "Influence" part. If you want to regard a system as a whole to be a fundamentally FTL entity, I have no objection to that view, my issue is with saying there are propagating influences that are what is FTL there. The "propagating influence" notion is always something that can be intercepted in principle, in every other context where that notion has proven useful. It has also proven to not be FTL in all those other contexts. So when you have a concept that shows two properties in the contexts in which it proved useful (that a propagating influence is not FTL and can be intercepted in principle), yet loses both those properties in some new context we are exporting it into, that is reason to doubt the success of that exportation. Any observations done by Alice do not need to influence observations done by Bob, they are simply part of the system being observed, as are the correlations they exhibit. The assumptions of the Bell inequality do not even come up in that approach.
 
Last edited:
  • #40
zonde said:
Yes, FTL is meaningless if distance is an illusion. But such a "distance is illusion" type of nonlocality is a very radical idea, comparable to solipsism. Distance is a very basic concept in our perception of the physical world.
Distance need not be dropped to treat a system and its environment as a whole thing. Distance is simply the scale of that whole thing. What can be dropped, and quite easily, is the idea that Bell inequality violations require "influences" to "move or propagate" across that distance, such that one can support a concept of FTL or retrocausality there. If it's not a cause, it can't be retrocausal. That's why I said Bell's inequality violations are no kind of issue unless one insists on imagining that systems are separate from the environment that establishes the attributes of that system, and unless one insists on regarding systems as composed of local parts that can only achieve global correlations by either "carrying information within those parts" or "propagating influence between those parts." When that kind of language does not help us regard behavior we know will happen as normal, we need a different kind of language.
 
  • #41
Ken G said:
What can be dropped, and quite easily, is the idea that Bell inequality violations require "influences" to "move or propagate" across that distance, such that one can support a concept of FTL there.
The idea that Bell inequality violations require "influences" to "move or propagate" across that distance is not an assumption. It is a conclusion. You have to point out where you see a possibility to reason differently.
You can start here:
If a theory predicts perfect correlations for the outcomes of distant experiments, then either the theory must treat these outcomes as deterministically produced from common past of these experiments or the theory must violate locality.
Do you see any way to do this reasoning differently?

Ken G said:
That's why I said Bell's inequality violations are no kind of issue unless one insists on imagining that systems are separate from the environment that establishes the attributes of that system
You can include the local environment with the system; this does not allow you to violate Bell inequalities.

Ken G said:
When that kind of language does not help us regard behavior we know will happen as normal, we need a different kind of language.
I don't think this is language issue.
 
  • #42
I got lost in Ken G's later posts, especially regarding the philosophy. To me there is no substantial difference between FTL and a holistic extended system: saying otherwise is either falling back on a wrong notion of locality or ignoring aspects of the phenomenology.
 
  • #43
DrChinese said:
... I ran another 100 and the results were 33 at the same angles which were 100% matched; and 67 at a difference of 120 degrees which were 37.31% matched. This model produces the QM statistics ...

Thank you for confirmation that the calculations used in the statistical/quantum section of the animation at http://www.animatedphysics.com/games/photon_longdistance_nonlocality.htm produce QM statistics.

DrChinese said:
... Alice and Bob measure at the same angle, there is an identical difference of γ − λ degrees for both Alice and Bob. That evaluates to something different from 1 or 0 ...

Yes, say it is 0.85. That number is compared to a random number between 0 and 1 to get an outcome of 1 or 0.

DrChinese said:
... Please, you are wildly off base to offer a model that is so obviously refuted by Bell...

Please note that the quantum simulation ends up with entries in matrix E where Bob and Alice have their angles the same but end up with different results. The quantum simulation is NOT a Bell model, because it allows the photon to have this property.

DrChinese said:
I am going to absolutely challenge everything you are saying about recreating the quantum statistics in a computer program using independent calculations. ...

The program is written in Javascript and the entire program can be viewed and checked with a right click and "view source" option within your browser, or some browsers have a "view source" in the menu.

That said, I can itemize some key aspects of the calculations within this program:

To set the photon's polarization

var ri = Math.random();
photonangle = Math.round(ri*360);
To set Bob and Alice polarizer angles

aliceangle = "0"
var ri = Math.random()
if (ri > .6666666) { aliceangle = "120" } else if (ri > .3333333) { aliceangle = "240" }
bobangle = "0"
ri = Math.random()
if (ri > .6666666) { bobangle = "120" } else if (ri > .3333333) { bobangle = "240" }
To check if Bob and Alice get detection events

Statistical/Quantum method

var probofhit = (Math.cos((rphotonangle - rbobangle)*2)+1)/2;
var ri = Math.random();
if (ri<probofhit) { bobhit = true };

var probofhit = (Math.cos((rphotonangle - raliceangle)*2)+1)/2;
var ri = Math.random();
if (ri<probofhit) { alicehit = true };
The classical method ended up a little messy ...

if (bobangle == "0") {
if ( ((parseInt(photonangle) <= 45) || (parseInt(photonangle) > 315)) || ((parseInt(photonangle) <= 225) && (parseInt(photonangle) > 135)) ) {
bobtotals[0][1] = bobtotals[0][1] + 1; bobhit = true ...
if (bobangle == "120") {
if ( ((parseInt(photonangle) <= 165) && (parseInt(photonangle) > 75)) || ((parseInt(photonangle) <= 345) && (parseInt(photonangle) > 255)) ) {
bobtotals[1][1] = bobtotals[1][1] + 1; bobhit = true ...
if (bobangle == "240") {
if ( ((parseInt(photonangle) <= 285) && (parseInt(photonangle) > 195)) || ((parseInt(photonangle) <= 105) && (parseInt(photonangle) > 15)) ) {
bobtotals[2][1] = bobtotals[2][1] + 1; bobhit = true ...

if (aliceangle == "0") {
if ( ((parseInt(photonangle) <= 45) || (parseInt(photonangle) > 315)) || ((parseInt(photonangle) <= 225) && (parseInt(photonangle) > 135)) ) {
alicetotals[0][1] = alicetotals[0][1] + 1; alicehit = true ...
if (aliceangle == "120") {
if ( ((parseInt(photonangle) <= 165) && (parseInt(photonangle) > 75)) || ((parseInt(photonangle) <= 345) && (parseInt(photonangle) > 255)) ) {
alicetotals[1][1] = alicetotals[1][1] + 1; alicehit = true ...
if (aliceangle == "240") {
if ( ((parseInt(photonangle) <= 285) && (parseInt(photonangle) > 195)) || ((parseInt(photonangle) <= 105) && (parseInt(photonangle) > 15)) ) {
alicetotals[2][1] = alicetotals[2][1] + 1; alicehit = true ...
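For readers who don't want to dig through the page source, the quoted "statistical/quantum" snippets can be boiled down to a self-contained sketch. This is a reconstruction, not the actual page code: variable names are shortened, and the r-prefixed angles in the original are assumed to be radian conversions. Running many trials shows what fully independent per-side sampling actually produces:

```javascript
// Standalone reconstruction of the quoted "statistical/quantum" method.
// Assumption: angles in degrees are converted to radians before the cosine,
// and Alice and Bob are sampled independently, with no reference to the
// other side's setting or outcome.
const deg2rad = (d) => d * Math.PI / 180;

// P(detection) per the quoted formula: (cos(2*(photon - setting)) + 1) / 2
function probOfDetection(photonDeg, settingDeg) {
  return (Math.cos(deg2rad(photonDeg - settingDeg) * 2) + 1) / 2;
}

function runTrials(n) {
  const settings = [0, 120, 240];
  let sameTotal = 0, sameMatch = 0, diffTotal = 0, diffMatch = 0;
  for (let i = 0; i < n; i++) {
    const photon = Math.random() * 360;                    // shared polarization
    const a = settings[Math.floor(Math.random() * 3)];     // Alice's setting
    const b = settings[Math.floor(Math.random() * 3)];     // Bob's setting
    const aliceHit = Math.random() < probOfDetection(photon, a); // independent draw
    const bobHit = Math.random() < probOfDetection(photon, b);   // independent draw
    const match = aliceHit === bobHit;
    if (a === b) { sameTotal++; if (match) sameMatch++; }
    else { diffTotal++; if (match) diffMatch++; }
  }
  return { same: sameMatch / sameTotal, diff: diffMatch / diffTotal };
}
```

With the photon angle uniform, the expected same-angle match rate for this model works out to E[p² + (1 − p)²] = 3/4, and the 120°-apart rate to 3/8, so independent draws cannot reproduce the 100% same-angle matching reported for the quantum animation.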
 
  • #44
edguy99 said:
That said, I can itemize some key aspects of the calculations within this program:

To set the photon's polarization

var ri = Math.random();
photonangle = Math.round(ri*360);
To set Bob and Alice polarizer angles

aliceangle = "0"
var ri = Math.random()
if (ri > .6666666) { aliceangle = "120" } else if (ri > .3333333) { aliceangle = "240" }
bobangle = "0"
ri = Math.random()
if (ri > .6666666) { bobangle = "120" } else if (ri > .3333333) { bobangle = "240" }
To check if Bob and Alice get detection events

Statistical/Quantum method

var probofhit = (Math.cos((rphotonangle - rbobangle)*2)+1)/2;
var ri = Math.random();
if (ri<probofhit) { bobhit = true };

var probofhit = (Math.cos((rphotonangle - raliceangle)*2)+1)/2;
var ri = Math.random();
if (ri<probofhit) { alicehit = true };

Is this your code or not? Regardless:

The local realistic simulation is fine for the hypothetical polarization and the selection of Alice and Bob's angles.

The bolded section is where the problem lies. For example: Cos(rphotonangle - rbobangle) * 2 as you wrote is not what is in the code. The code shows Cos(rphotonangle - rbobangle) ^ 2 instead. That yields a range of 0 to 1. When you add 1 and divide by 2, you get a range from .5 to 1. You then compare to a random number between 0 and 1. Oops. That yields HITS about 75% of the time. Should be 50%. And yet that is not what is being reported. The actual hits (Reds vs Blues) are fairly close to 50%. Something is wrong here.

And then there is the issue when there is a photon angle like 45 degrees and Alice and Bob are set at 0 degrees. The formula evaluates the same for both as usual, .75. That is compared to a random number for Alice and for Bob. Statistically that should produce the same outcome 62.5% of the time, but different outcomes 37.5% of the time. So there should be a reasonable number of mismatches in the E matrix. I have not seen a single one. Something is wrong.
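The 62.5%/37.5% split above is just the arithmetic of two independent draws against the same probability: P(same outcome) = p² + (1 − p)², which equals 1 only when p is exactly 0 or 1. A one-function check of the argument (a sketch, not code from the simulation):

```javascript
// Probability that Alice and Bob get the same outcome when each
// independently compares the same hit probability p to their own
// uniform random number: both hit (p*p) or both miss ((1-p)*(1-p)).
function sameOutcomeProb(p) {
  return p * p + (1 - p) * (1 - p);
}
```

sameOutcomeProb(0.75) gives 0.625, i.e. mismatches 37.5% of the time; the same holds for any p strictly between 0 and 1, which is why certain same-angle matching is impossible with independent calculations.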

Something is rotten. I am not going to debug this logic, but you are making a claim that cannot be substantiated. Your model is perhaps the most basic local realistic model, and does not work.
 
  • #45
edguy99 said:
Thank you for confirmation that the calculations used in the statistical/quantum section of the animation at http://www.animatedphysics.com/games/photon_longdistance_nonlocality.htm produce QM statistics.
The CHSH and Bell inequalities are identities that cannot be violated algorithmically the way you are trying. You might as well try to show that 1+1=3. That would clearly indicate an error somewhere. I conjecture that to violate an identity you need to build some cheating into your code, like deliberately switching a result if a certain condition occurs. No-one has ever found the cheat, and it is possible that one does not exist.
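The "identity" point can be made concrete by brute force: every local deterministic strategy assigns ±1 outcomes to each party's two settings, and the CHSH combination S = a0·b0 + a0·b1 + a1·b0 − a1·b1 never exceeds 2 in magnitude for any of them. A small enumeration (an illustrative sketch, not tied to the simulation under discussion):

```javascript
// Enumerate every local deterministic strategy: Alice picks outcomes
// (a0, a1) in {-1,+1} for her two settings, Bob independently picks
// (b0, b1). Return the largest |S| over all 16 strategies, where
// S = a0*b0 + a0*b1 + a1*b0 - a1*b1 is the CHSH combination.
function maxDeterministicCHSH() {
  const vals = [-1, 1];
  let max = -Infinity;
  for (const a0 of vals) for (const a1 of vals)
    for (const b0 of vals) for (const b1 of vals) {
      const S = a0 * b0 + a0 * b1 + a1 * b0 - a1 * b1;
      max = Math.max(max, Math.abs(S));
    }
  return max;
}
```

Since any local hidden-variable model is a probabilistic mixture of these 16 strategies, its |S| is also capped at 2, whereas quantum mechanics reaches 2√2; that is why no cheat-free algorithm of this kind can violate the bound.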

There is a very interesting analysis of EPR and Bell in the book by Asher Peres (and it's free).
 
Last edited by a moderator:
  • #46
zonde said:
The idea that Bell inequality violations require "influences" to "move or propagate" across that distance is not an assumption. It is a conclusion. You have to point out where you see a possibility to reason differently.
Such a possibility is what I have said already. To get the Bell inequality, the key assumption is that the system must be in a "complete state" that allows its parts to produce probability distributions for any measurements put on them, independently of any other measurement on any other part of the system. Individual measurements are always consistent with this expectation, but not the correlations between them in entangled systems. Thus the claim about the probability distributions is wrong, but it can be relaxed in several ways that still accommodate the results, even though none of these accommodations are themselves testable-- they are a matter of personal taste. One is to say that when a measurement is done on one part of the system, some kind of "retrocausal influence" propagates to other measurements and alters their probability distributions. For obvious reasons, this is an awkward language for attributing the observed behavior!

Another approach is to simply say that the measurements done on the system are part of what determines the probability distributions of those measurements. This is, after all, just what we see with entangled systems; it was only ever us who said these systems had to be composed of independent probability distributions. We know they are not, so why force "influences" down their throat, when we can just say what we see to be true: the probabilities depend on the measurements we choose, because part of what we mean by the system is how the outcomes of chosen measurements interact with the preparation. That's really all physics ever concerns itself with; the idea that you need influences to alter the independent probabilities is merely a holdover from other contexts where that actually works.
You can start here:
If a theory predicts perfect correlations for the outcomes of distant experiments, then either the theory must treat these outcomes as deterministically produced from common past of these experiments or the theory must violate locality.
Do you see any way to do this reasoning differently?
First of all, entanglement is never about perfect correlations, because you don't get Bell violations with perfect correlations. But a more important issue is, I never said the theory doesn't violate locality; of course it does. I said it doesn't need to violate it by application of a concept of "FTL or retrocausal influences." One simply says that to talk meaningfully about a system, we need more than its preparation: we need to know what attributes of that system are being established by measurement, and the probabilities it will exhibit depend on the way the measurements establish those attributes. Again, outcomes are an interaction between preparation and measurement choices; no influences needed, FTL or otherwise. I'd say the same thing even if the speed of light were infinite-- there's no magical "influence" there either way, and the causality violations are just a good clue that we made a bad choice of language.
 
  • #47
DrChinese said:
Is this your code or not? Regardless:

The local realistic simulation is fine for the hypothetical polarization and the selection of Alice and Bob's angles.

The bolded section is where the problem lies. For example: Cos(rphotonangle - rbobangle) * 2 as you wrote is not what is in the code. The code shows Cos(rphotonangle - rbobangle) ^ 2 instead. That yields a range of 0 to 1. When you add 1 and divide by 2, you get a range from .5 to 1. You then compare to a random number between 0 and 1. Oops. That yields HITS about 75% of the time. Should be 50%. And yet that is not what is being reported. The actual hits (Reds vs Blues) are fairly close to 50%. Something is wrong here.

And then there is the issue when there is a photon angle like 45 degrees and Alice and Bob are set at 0 degrees. The formula evaluates the same for both as usual, .75. That is compared to a random number for Alice and for Bob. Statistically that should produce the same outcome 62.5% of the time, but different outcomes 37.5% of the time. So there should be a reasonable number of mismatches in the E matrix. I have not seen a single one. Something is wrong.

Something is rotten. I am not going to debug this logic, but you are making a claim that cannot be substantiated. Your model is perhaps the most basic local realistic model, and does not work.

Not sure where you got the square, but the code in the program and as above has no square in it. var probofhit = (Math.cos((rphotonangle - raliceangle)*2)+1)/2;

An example may help clarify. 0° difference implies a 1.0 probability, 45° difference implies a 0.5 probability, 90° difference implies a 0.0 probability.
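Those three values can be verified directly from the formula, assuming (as the r-prefix in the code suggests) that the degree difference is converted to radians before being passed to Math.cos, which takes radians:

```javascript
// The detection probability quoted from the animation code, with the
// angle difference given in degrees and converted to radians first:
// probofhit = (cos(2 * diff) + 1) / 2
function probOfHit(diffDeg) {
  const diffRad = diffDeg * Math.PI / 180;
  return (Math.cos(diffRad * 2) + 1) / 2;
}
```

Up to floating-point rounding, this returns 1, 0.5, and 0 for differences of 0°, 45°, and 90°; note that it is algebraically identical to cos²(Δ), i.e. Malus's law.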

WRT the mismatches in the E matrix, the first 3 animations use the classical Bell style that has no mismatches. The last (4th) animation uses the formula above and models the quantum state. You will see mismatches in the 4th animation.

I am considering putting a button on each of the animations allowing you to choose whether you want classical or quantum; maybe that would help clarify the quantum vs classical. Edit: after posting this, I see the * (star, meaning multiply) looks a little like a ^ (meaning to a power of). Just to be clear, the star (*) in javascript means multiply.
 
  • #48
edguy99 said:
Not sure where you got the square, but the code in the program and as above has no square in it. var probofhit = (Math.cos((rphotonangle - raliceangle)*2)+1)/2;

An example may help clarify. 0° difference implies a 1.0 probability, 45° difference implies a 0.5 probability, 90° difference implies a 0.0 probability.

WRT the mismatches in the E matrix, the first 3 animations use the classical Bell style that has no mismatches. The last (4th) animation uses the formula above and models the quantum state. You will see mismatches in the 4th animation.

I am considering putting a button on each of the animations allowing you to choose whether you want classical or quantum; maybe that would help clarify the quantum vs classical.

We are not going to discuss your computer program here, which is flawed to the extent you think it is representative of local realism. As simply as I can say it: publish it in a peer reviewed journal. We have been through hundreds of attempts to use this forum to push local realistic concepts, and they all get shut down. Either close this line off yourself, or your next reference to it will be reported to the moderators.
 
  • #49
DrChinese said:
We are not going to discuss your computer program here, which is flawed to the extent you think it is representative of local realism. As simply as I can say it: publish it in a peer reviewed journal. We have been through hundreds of attempts to use this forum to push local realistic concepts, and they all get shut down. Either close this line off yourself, or your next reference to it will be reported to the moderators.

Ouch!
 
  • #50
edguy99 said:
Ouch!

Edguy, it's a very nice program. I wish I had written it myself (I am a software developer). And it does have merits from an instructive point of view just as a visual aid - which is how I think it is intended.

But trying to push something that claims to match QM and is "local realistic" is going to be a violation of PhysicsForums guidelines regarding speculation and unpublished works (ie what is an acceptable source).

If you want to understand why your model does not match QM (and actual experiments as well), then PM me your email address and I will explain offline. Just because the simulation works does not mean it passes all the tests.
 
  • #51
Ken G said:
Another approach is to simply say that the measurements done on the system are part of what determines the probability distributions of those measurements. This is, after all, just what we see with entangled systems; it was only ever us who said these systems had to be composed of independent probability distributions. We know they are not, so why force "influences" down their throat, when we can just say what we see to be true: the probabilities depend on the measurements we choose, because part of what we mean by the system is how the outcomes of chosen measurements interact with the preparation. That's really all physics ever concerns itself with; the idea that you need influences to alter the independent probabilities is merely a holdover from other contexts where that actually works.

First of all, entanglement is never about perfect correlations, because you don't get Bell violations with perfect correlations. But a more important issue is, I never said the theory doesn't violate locality; of course it does. I said it doesn't need to violate it by application of a concept of "FTL or retrocausal influences." One simply says that to talk meaningfully about a system, we need more than its preparation: we need to know what attributes of that system are being established by measurement, and the probabilities it will exhibit depend on the way the measurements establish those attributes. Again, outcomes are an interaction between preparation and measurement choices; no influences needed, FTL or otherwise. I'd say the same thing even if the speed of light were infinite-- there's no magical "influence" there either way, and the causality violations are just a good clue that we made a bad choice of language.
Well yes, there is another explanation that works. It's called superdeterminism. Unfortunately it's non-scientific. If that is what you mean (as it seems to me) then it's no real alternative to FTL.

Btw, as you refer to FTL and retrocausal influences together, do you consider them equally unacceptable?
 
  • #52
DrChinese said:
... claims to match QM and is "local realistic" is going to be a violation of PhysicsForums guidelines regarding speculation and unpublished works (ie what is an acceptable source)...

I'm sorry, I will try to be more careful. I actually had not even checked if it did match QM till you mentioned it. I was pleasantly surprised to hear you say it did.

I have been concentrating on the process of constructing the matrices A, B, E, and F. The only reason to run the animation is to generate the matrices, but it is nice to highlight the time-dependent things that are happening, ie 1/ random photon shot, 2/ Bob/Alice detectors randomly set, 3/ detection recorded in the matrices. Repeat until done.

I found the E matrix interesting as I had not realized the importance of the two 0's in the E matrix in the Bell test until I did the animation.

Also, I have converted the animation to do only "statistical" photons (ie. it used to be that stages 1, 2, and 3 were classical and stage 4 was statistical; now there are only 3 stages, all statistical). Now you can see some numbers showing up in matrix E when doing the single shot animation.

BTW, it would be great if you were interested in trying the functions in post #44 and generating a few sets of matrices to have a look at :)
 
  • #53
zonde said:
Well yes, there is another explanation that works. It's called superdeterminism.
No, what I just described is not superdeterminism. What I just described is pure science. Let me repeat it to prove that.

In scientific thinking, any "system" can be regarded as a kind of machine for connecting a preparation with a set of outcomes for any set of measurements, where what I mean by "outcomes" can include probability distributions and not just definite predictions. That's it, that's what I said. There is no scientific requirement whatsoever that the "system" be able to be analyzed in terms of parts, it just happens that this is often possible. There is no scientific requirement whatsoever that the probability distributions be independent of each other, in the sense that the preparation must determine what those distributions are, independent of the actual set of measurements that are used-- it just so happens that this is often possible. And finally, there is no scientific requirement whatsoever for us to say that whenever we find correlations between measured outcomes that depend on what measurements are done, there have to be "influences" that propagate across the system, though again, there are situations where that is a useful way to think of what is happening (generally situations where the "influences" are slower than light and could in principle be intercepted, i.e., when the influences are themselves part of the measurable system). In all cases, what is happening here is simply that models that have worked for us in various contexts, models like "parts" and "independence" and "influences", are simply being mistaken for scientific requirements. But science is not defined by the model, it is defined by the method.
Btw, as you refer to FTL and retrocausal influences together, do you consider them equally unacceptable?
Yes, I see no particularly important distinction between crossing one light-cone boundary versus crossing two of them. Either way, it's not the way the time-honored model we call an "influence" works. My main objection to that language is that it demonstrably forces people to conclude that a behavior they are sure will happen is "weird." That's the signpost of a failing picture. What is weird is supposed to be what we do not expect, rather than what we do expect.
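The Bell violation under discussion can be made concrete with a short calculation. As a minimal sketch (my own illustration, not from the thread), here is the standard CHSH quantity for polarization-entangled photons in the |HH⟩+|VV⟩ state, for which quantum mechanics predicts the correlation E(a, b) = cos 2(a − b):

```python
import math

def E(a, b):
    # Quantum correlation for the |HH> + |VV> photon pair with
    # polarizers at angles a and b (radians): E(a, b) = cos(2(a - b))
    return math.cos(2 * (a - b))

# Standard CHSH settings: a = 0, a' = 45 deg, b = 22.5 deg, b' = 67.5 deg
a, ap = 0.0, math.pi / 4
b, bp = math.pi / 8, 3 * math.pi / 8

S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)
print(S)  # 2*sqrt(2) ~ 2.828, beyond the local-realist bound of 2
```

Any model in which the preparation fixes each side's answers independently of the distant setting is limited to |S| ≤ 2, which is exactly the inequality these correlations violate.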
 
Last edited:
  • #54
Ken G said:
Let me repeat it to prove that.

I really can't understand you, Ken G. Could you address my question directly?

I don't even know if I understood your premises here, but this is what it seems to me you're saying: the resulting probability distribution depends on the holistic system (which includes the measurement instruments). I've made a picture:

[attached image: uhDH7Ub.jpg]


It doesn't matter if the polarizers are part of the holistic system: sure, rotating one changes the probability distribution of the outcomes, but it does so at the other side too. A choice here impacts there, because the system is extended in space. You can say the whole system changes nonlocally, so as to avoid using the term FTL. But that's only a nominal difference. It may be verbally nicer.

Am I missing something?
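One detail worth keeping in view with the picture above: in the quantum prediction, a choice "here" changes the joint distribution, but never the local statistics "there" (the no-signaling property). A minimal sketch, assuming the standard cos² coincidence law for the |HH⟩+|VV⟩ pair:

```python
import math

def joint(a, b):
    # Joint outcome probabilities for the |HH> + |VV> photon pair,
    # polarizers at angles a and b; '+' = pass, '-' = absorb.
    c = math.cos(a - b) ** 2
    s = math.sin(a - b) ** 2
    return {('+', '+'): c / 2, ('-', '-'): c / 2,
            ('+', '-'): s / 2, ('-', '+'): s / 2}

def marginal_B(a, b):
    # Probability that Bob's photon passes, summed over Alice's outcomes
    p = joint(a, b)
    return p[('+', '+')] + p[('-', '+')]

# Rotating Alice's polarizer changes the joint distribution...
print(joint(0.0, 0.0), joint(math.pi / 4, 0.0))
# ...but Bob's local statistics are 50/50 for either setting:
print(marginal_B(0.0, 0.0), marginal_B(math.pi / 4, 0.0))
```

So whatever "impacts there" means, it is invisible in anything Bob can measure locally; it only shows up when the two records are compared.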
 
  • #55
Ken G said:
There is no scientific requirement whatsoever that the "system" be able to be analyzed in terms of parts, it just happens that this is often possible.
If we can make observations in terms of parts then of course we can analyze these observations.

Ken G said:
There is no scientific requirement whatsoever that the probability distributions be independent of each other, in the sense that the preparation must determine what those distributions are, independent of the actual set of measurements that are used-- it just so happens that this is often possible.
Actually, the no-superdeterminism assumption requires that the preparation is independent of reasonably randomized measurement settings.

Ken G said:
And finally, there is no scientific requirement whatsoever for us to say that whenever we find correlations between measured outcomes that depend on what measurements are done, there has to be "influences" that propagate across the system, though again, there are situations where that is a useful way to think of what is happening (generally situations where the "influences" are slower than light and could in principle be intercepted, i.e., when the influences are themselves part of the measurable system).
There is such a requirement. That requirement is called falsifiability. It means there should be a valid observation that could indicate an FTL "influence" and falsify the "no FTL" hypothesis, even if we can't create FTL phenomena artificially.

Ken G said:
In all cases, what is happening here is simply that models that have worked for us in various contexts, models like "parts" and "independence" and "influences", are simply being mistaken for scientific requirements. But science is not defined by the model, it is defined by the method.
Science is also defined by the requirements that scientific models have to fulfill.

Ken G said:
Yes, I see no particularly important distinction between crossing one light-cone boundary versus crossing two of them. Either way, it's not the way the time-honored model we call an "influence" works. My main objection to that language is that it demonstrably forces people to conclude that a behavior they are sure will happen is "weird." That's the signpost of a failing picture. What is weird is supposed to be what we do not expect, rather than what we do expect.
Are your objections about wording or what?

There are two theories, and what we expect from one theory contradicts what we consider possible in the other. If you say this is the signpost of a failing picture ... well, I agree.
 
  • #56
ddd123 said:
You can say the whole system changes nonlocally, so to avoid using the term FTL. But it's only a nominal difference. It may be verbally nicer.

Am I missing something?
Yes. The concept of "FTL" involves the concept "faster." That involves the concept of speed, which involves the concept of movement or propagation. Also, when people use the term FTL, as in this thread, they generally couple it to the term "influence." So this is what I am saying has no need to be present in entanglement: the concept of a propagating influence.

That's why I said it's not the "FTL" itself I object to: if all you mean by that is that once you've embraced realism, you are stuck with nonlocality, then that is what we know to be true. But we don't have to leave it at that, if we still have something that seems "weird"-- we have something else too. We have the repudiation of the idea that a system is made of parts that connect via "propagating influences". That latter notion is simply a view that often works for us, so we mistake it for some kind of universal truth, or worse, a universal requirement for scientific investigation.

So when you say the system "changes nonlocally" when a measurement choice is made, even that implies a hidden assumption. You see that the measurement is applied at one place and time, and the system changes everywhere. But that isn't supported by the data either.

The most general meaning of "a system" is a mapping between a preparation and a set of measurement probability distributions-- after all, that's all that is involved in empirical science. Notice that if you define the "system" that way, it never "changes" nonlocally! To get rid of "nonlocality", one merely needs to define a system the way I just did, and poof, it's gone. The nonlocality comes from requiring that the system must exist independently of the measurements that establish its attributes. That's why it is actually "local realism," i.e., the combination of locality and realism, that is ruled out-- not either one individually. Personally, I find it much more scientific to define a system the way I just did, but this does not get called "realism" because the measurements are included in the meaning of the system.
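The sense in which "local realism" is ruled out can be shown with a small counting argument: if each wing carries predetermined answers for both of its possible settings, the CHSH combination can never exceed 2. A sketch (my own illustration, not from the thread):

```python
import itertools

# Enumerate every deterministic local assignment: Alice's answers A(a), A(a')
# and Bob's answers B(b), B(b'), each fixed in advance to +1 or -1.
best = max(abs(Aa * Bb - Aa * Bbp + Aap * Bb + Aap * Bbp)
           for Aa, Aap, Bb, Bbp in itertools.product([1, -1], repeat=4))
print(best)  # 2 -- no predetermined-answer model reaches the quantum 2*sqrt(2)
```

Algebraically, the sum is Aa(Bb − Bbp) + Aap(Bb + Bbp); one parenthesis is always 0 and the other ±2, so the bound of 2 holds for every assignment, and mixtures of assignments can do no better.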
 
  • #57
I don't understand the definition. One thing is not slicing the system into parts in the sense that they're not taken to be independent; another is ignoring spatiotemporal extension and events. Are you giving up the notion of spacetime events? What do you substitute to that?
 
  • #58
zonde said:
If we can make observations in terms of parts then of course we can analyze these observations.
What do you mean by "observations in terms of parts"? What we have are observations at various places and times, that's it-- that's not parts. The "parts" are your personal philosophy here, nothing more. Now, it's certainly not a rare philosophy, and the "parts" model certainly works very well for us in almost all situations. But not all, and that's the point. Hence, we should question that personal philosophy about observations of systems. It is the observations that have a place and time, not the "parts of the system." That's why I brought up white dwarf stars: they make this point really clear, when you have something the size of a planet that cannot be analyzed in terms of "observations on parts" because the "parts" are indistinguishable from each other.

Actually, the no-superdeterminism assumption requires that the preparation is independent of reasonably randomized measurement settings.
I haven't said anything about superdeterminism other than that I am not talking about superdeterminism, nor have you shown that what I am saying is tantamount to superdeterminism.
There is such requirement. That requirement is called falsifiability.
Then you just violated falsifiability with your first sentence above. What is falsifiable about the claim that observations at a place and time are "observations in terms of parts," if entanglement is not a falsification of that claim, and indistinguishability in white dwarfs is not a falsification of that claim? Anyone who thinks observations are "observations in terms of parts" (which, I admit, is pretty much everyone before they learn quantum mechanics) is going to find both entanglement and fermionic indistinguishability "weird." But weirdness is not falsification, it is more like repudiation.

That means there should be a valid observation that could indicate an FTL "influence" and falsify the "no FTL" hypothesis, even if we can't create FTL phenomena artificially.
Are you saying there is such a valid observation? If not, it means we are deciding between personal philosophies, and I've never claimed otherwise.
There are two theories and what we expect from one theory contradicts what we consider possible in the other theory. If you say this signpost of a failing picture ... well I agree.
I don't see what two contradicting theories you have in mind here. If anyone had been talking about slower-than-light influences that can be intercepted and studied as part of the system, then we'd be talking about a theory that makes different predictions than quantum mechanics, but no one is talking about a theory like that.
 
  • #59
To me the white dwarf example is absolutely the same, so I kept using the EPR one. You remove an atom from one side and, suddenly, placing an atom on the other side becomes possible. It's exactly the same as in my picture above, with an atom placer-remover instead of the polarizer.
 
  • #60
ddd123 said:
I don't understand the definition. One thing is not slicing the system into parts in the sense that they're not taken to be independent; another is ignoring spatiotemporal extension and events. Are you giving up the notion of spacetime events?
I have in no way given up the notion of spacetime events, I have merely noticed what is in evidence: the spacetime events are the observations, not the parts of the system. The parts of the system need mean nothing more than what the system is composed of, which is all part of the preparation. If you do an observation at a given place and time, it is certainly important that the observation occurred at a given place and time, and you will never need to use any other concept of "parts."

Once the measurement is made, you can regard that as a new preparation, a new system, and a new mapping from that total preparation to new outcomes. Those new probabilities can change instantly because your information has changed instantly, that's "Bertlmann's socks."

Nonlocality only appears when you combine with realism, i.e., combine with the picture that there exists a unique probability distribution that goes with each measurement independently of all other measurements outside the past light cone of that one. Local realism says you might not know what that unique distribution is, and any new information you get that constrains the preparation could cause you to reassess that unique probability, but you hold that it still exists unchanged. But if you don't think of a system as a list of unique probabilities, but rather as a machine for connecting a preparation with a set of probabilities associated with any set of measurements you could in principle do, then you have no problems with nonlocality.
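"Bertlmann's socks" can be simulated directly: a classical, local preparation with perfect anticorrelation, where learning one outcome updates the conditional probability for the other instantly, without anything propagating. A hypothetical sketch:

```python
import random

random.seed(0)  # reproducible sampling

def prepare_pair():
    # Each pair is prepared with opposite colours (the "preparation").
    left = random.choice(['pink', 'green'])
    right = 'green' if left == 'pink' else 'pink'
    return left, right

trials = [prepare_pair() for _ in range(10000)]

# Before looking: P(right sock is pink) is about 0.5
p_prior = sum(r == 'pink' for _, r in trials) / len(trials)

# After seeing the left sock is green: P(right is pink | left is green) = 1
green_left = [(l, r) for l, r in trials if l == 'green']
p_post = sum(r == 'pink' for _, r in green_left) / len(green_left)

print(p_prior, p_post)
```

The probability jump from ~0.5 to 1 is purely an information update about a shared preparation; the quantum case differs only in that no single preassigned distribution can reproduce the correlations for all setting choices at once.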
 
