Why is superdeterminism not the universally accepted explanation of nonlocality?

In summary, the conversation discusses the concept of nonlocality and entanglement in a deterministic universe, where the information about instantaneous transfer is known to the universe. The conversation also touches upon the idea of superdeterminism, which some people reject due to its conspiratorial nature and lack of a concrete scientific theory. The possibility of interpreting nonlocality as an answer rather than a problem is also mentioned, as well as the importance of keeping beliefs aligned with measured reality. The conversation concludes with the suggestion that it may be better to believe in the existence of random and non-local phenomena rather than inventing longer explanations.
  • #316
lugita15 said:
ThomasT, regardless of whether you think my idealized setup is a good representation of Bell tests, just answer me this: for this setup, in which there are only individual detection results, do you or do you not believe that it is possible for a local deterministic theory to be compatible with all the predictions of QM? If your answer is yes, my followup would be: which of the seven points quoted in post #307 do you disagree with and why? (And when reading that post, please keep in mind that when I say the word correlation I mean correlation between individual detections, not correlation coefficient of the rate of coincidental detection and theta.)
Your points involve mismatches (1,0 or 0,1 -- ie., paired or coincidental detection attributes) at relative angles (ie., Theta).

The problem in constructing an LR model of entanglement is that it has to encode some sort of locality condition. This is done by assuming that events (polarizer settings and individual data sequences) at A and B are independent of each other. This is manifested in your points by calculating the expected mismatches at some Theta as being no more than twice the mismatches at Theta/2. It starts with point 5, where you separate the probability at Theta = 60 degrees into the probabilities at the Theta = 30 degree offsets. This is your locality, or, more precisely, independence assumption.

Does this mean that nature is nonlocal? I don't think so. It's just that the results of Bell tests can't be understood in terms of independent events at A and B. The measurement and (assumed) underlying parameters are irreducible.

But how can one begin to understand the experimental results in a local deterministic way? Simply put, the polarizers in the joint context are measuring an underlying parameter (unlike the underlying parameter that determines individual detection and varies randomly from pair to pair) that isn't varying from pair to pair. They're measuring a relationship between photons of a pair. So, an independence assumption doesn't fit the experimental situation (even though it's a necessary constraint on standard LR models of entanglement). But the assumption that the relationship between photons of a pair is produced locally does fit the experimental situation (eg., see the emission model associated with Aspect et al. 1982). And then of course there's the experimentally documented behavior of light in polariscopic setups.

It all, reasonably I think, points to local determinism, as far as I can tell. So, no need for superdeterminism.
 
  • #317
ThomasT said:
Your points involve mismatches (1,0 or 0,1 -- ie., paired or coincidental detection attributes) at relative angles (ie., Theta).
That's correct, but I hope you acknowledge that in this case the mismatches are just mismatches of individual detection results, and hence the only things that can possibly explain the mismatches are whatever parameters or hidden variables explain the individual detection results.
The problem in constructing an LR model of entanglement is that it has to encode some sort of locality condition. This is done by assuming that events (polarizer settings and individual data sequences) at A and B are independent of each other. This is manifested in your points by calculating the expected mismatches at some Theta as being no more than twice the mismatches at Theta/2. It starts with point 5, where you separate the probability at Theta = 60 degrees into the probabilities at the Theta = 30 degree offsets. This is your locality, or, more precisely, independence assumption.
OK, let me tell you my reasoning for going from step 4 to 5, and you tell me where I am making an "independence assumption". (For the record, I think my only locality assumption was in step 3, not in step 5). For each angle pair (θ1,θ2), we send a billion entangled pairs (remember, one pair every hour... that's why it's called "in principle") and by comparing the individual results of the two experimenters, we calculate the percentage of pairs that had a mismatch, called R(θ1,θ2). Note that whatever determines the individual detection results must also determine when there is and is not a mismatch, and thus determines R(θ1,θ2). After finding the function R, we find that it has the property that R(θ1+C,θ2+C)=R(θ1,θ2) for all C, so in particular R(θ1,θ2)=R(θ1-θ2,0)=R(θ,0), so we conclude that the difference between the two angles, not the individual angles, is what matters most, so we can just write R(θ). In particular, R(30)=R(0,-30)=R(30,0), R(60)=R(60,0)=R(30,-30), and R(0)=R(0,0)=0.
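As a rough numerical check of this invariance property (taking the standard QM prediction for this idealized setup, R(θ1,θ2) = sin²(θ1−θ2), as an assumed input):

```python
import math

def R(theta1, theta2):
    # Assumed QM mismatch probability for entangled photons measured at
    # polarizer angles theta1 and theta2 (degrees): sin^2 of the difference.
    d = math.radians(theta1 - theta2)
    return math.sin(d) ** 2

# Translation invariance: shifting both angles by C leaves R unchanged,
# so R depends only on theta = theta1 - theta2 and we may write R(theta).
for C in (0, 15, 45, 90):
    assert abs(R(30 + C, 0 + C) - R(30, 0)) < 1e-12

print(R(30, 0))  # ~0.25
print(R(60, 0))  # ~0.75
print(R(0, 0))   # 0.0
```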
Does this mean that nature is nonlocal? I don't think so. It's just that the results of Bell tests can't be understood in terms of independent events at A and B. The measurement and (assumed) underlying parameters are irreducible.
What do you mean by parameters being irreducible?
But how can one begin to understand the experimental results in a local deterministic way? Simply put, the polarizers in the joint context are measuring an underlying parameter (unlike the underlying parameter that determines individual detection and varies randomly from pair to pair) that isn't varying from pair to pair.
I really don't understand you. Don't you agree that in my idealized setup, all data and calculations done come from the recording of individual detection results, and thus the parameter that determines ALL experimental findings is whatever determines whether a given photon goes through or not? I thought you agreed with this before, which is why you thought my setup didn't capture the features crucial for Bell tests.

Do you still stand by your comment below:
ThomasT said:
Your idealized setup only has to do with individual detecions. Bell showed that LR models of individual detections are compatible with QM.
 
  • #318
lugita15 said:
... the only thing that can possibly explain the mismatches are whatever parameters or hidden variables explain the individual detection results.
That can't possibly be the case, as far as I can tell. The rate of mismatches, ie., the rate of coincidental detection, varies predictably as a function of Theta. But the rate of individual detection doesn't vary, no matter how the polarizers are oriented.

So how can the same underlying parameter be determining both coincidental detection and individual detection?

lugita15 said:
... tell me where I am making an "independence assumption".
Step 5. In this step you've decomposed the probability at a certain Theta into twice the probability at Theta/2. The problem is, light doesn't behave that way.

lugita15 said:
What do you mean by parameters being irreducible?
It means that coincidental detection is being correlated with Theta, and you can't analyze it any further and get a model that agrees with the experimental results. In the joint context, Theta is the independent variable; the relationship between entangled photons is an assumed underlying constant; and the rate of coincidental detection is the dependent variable.

ThomasT said:
... the polarizers in the joint context are measuring an underlying parameter (unlike the underlying parameter that determines individual detection and varies randomly from pair to pair) that isn't varying from pair to pair.

lugita15 said:
I really don't understand you. Don't you agree that in my idealized setup, all data and calculations done come from the recording of individual detection results, and thus the parameter that determines ALL experimental findings is whatever determines whether a given photon goes through or not?
You're missing an important part of Bell tests. The matching of the data streams. The pairing of detections at A and B. Without this there's no entanglement.

The key to understanding why the assumption of local determinism isn't incompatible with QM is that the underlying parameter that's being measured by Theta and that's determining coincidental detection isn't varying from pair to pair. Why/how can this be inferred? Because the rate of coincidental detection varies ... with Theta.

But the rate of individual detection doesn't vary ... no matter how an individual polarizer is oriented.

The key to understanding why LR theories of quantum entanglement are ruled out is that they have to encode some sort of independence assumption. But the problem is that this contradicts the experimental situation, which evidently produces a relationship between the entangled entities via local transmissions/interactions. And because this relationship is being measured by a global instrumental variable the appropriately matched data streams at A and B aren't independent of each other.

Changing the setting of the polarizer at A (or B) instantaneously changes Theta. Recording a qualitative result at A (or B) instantaneously changes the sample space at B (or A).

Quantum nonseparability refers to the irreducibility of the relationship between entangled entities, as well as the measurement parameters, and also the data associated with that relationship.
 
  • #319
yoron said:
Free will?

Hey, nothing is free. My bank taught me that.

:smile:
 
  • #320
ThomasT said:
That can't possibly be the case, as far as I can tell. The rate of mismatches, ie., the rate of coincidental detection, varies predictably as a function of Theta. But the rate of individual detection doesn't vary, no matter how the polarizers are oriented.

So how can the same underlying parameter be determining both coincidental detection and individual detection?
I'm having a hard time understanding this. Do you or do you not agree that in my idealized setup, the mismatches are just mismatches of individual detection results? And that if the individual detection results are the same, then the mismatches are the same? And thus that the mismatches are completely determined by the individual detection results?

If you have a bunch of data in an excel spreadsheet, then the value of any function calculated from this data is entirely determined by the data. I don't know how you can reasonably disagree with this.
Step 5. In this step you've decomposed the probability at a certain Theta into twice the probability at Theta/2. The problem is, light doesn't behave that way.
You have the uncanny ability of focusing on steps I consider trivial. To my mind, step 5 is a completely obvious consequence of step 4. I am just applying the definition of R, which is that the probability that P(θ1)≠P(θ2) is equal to R(θ1-θ2). How can you disagree with that definition?
You're missing an important part of Bell tests. The matching of the data streams. The pairing of detections at A and B. Without this there's no entanglement.
But the result of any analysis, matching, or pairing of the data is surely determined BY the data, is it not? And thus the parameters or hidden variables that determine the data must determine anything that is derived from the data, right?
 
  • #321
bohm2 said:
I don't think anybody can explain why position (Q) assumes pre-existence in BM while everything else is contextual: spin, energy, and other non-position “observables”.
Are you saying that position is non-contextual in Bohmian mechanics?
 
  • #322
lugita15 said:
Are you saying that position is non-contextual in Bohmian mechanics?
Yes, position is the only non-contextual observable. So position is the only variable that can be regarded as being possessed before measurement, in such a way that “faithful measurements” just reveal it. Having said that, Bohm (at least in his metaphysics) didn't appear to believe in the "reality" of particles:
We have frequently been asked the question “Didn’t Bohm believe that there was an actual classical point-like particle following these quantum trajectories?" The answer is a definite No! For Bohm there was no solid 'particle' either, but instead, at the fundamental level, there was a basic process or activity which left a ‘track’ in, say, a bubble chamber. Thus the track could be explained by the enfolding and unfolding of an invariant form in the overall underlying process.
Zeno Paradox for Bohmian Trajectories: The Unfolding of the Metatron
http://www.freewebs.com/cvdegosson/ZenoPaper.pdf
 
  • #323
bohm2 said:
Yes, position is the only non-contextual observable. So position is the only variable that can be regarded as being possessed before measurement, in such a way that “faithful measurements” just reveal it.
Are there "unfaithful measurements" which do not simply reveal the pre-measurement value? If so, does that mean there is still some lingering contextuality even in position? In other words, is Bohmian mechanics not exploiting the full noncontextuality allowed by the Kochen-Specker theorem for the position observable?
 
  • #324
lugita15 said:
In other words, is Bohmian mechanics not exploiting the full noncontextuality allowed by the Kochen-Specker theorem for the position observable?
There is no Kochen-Specker theorem for the position observable, nor for any SINGLE observable. The Kochen-Specker theorem is a theorem about a SET of mutually NON-COMMUTING observables. If the set contains only one observable, then all observables in this set commute with each other (because [A,A]=0), so the Kochen-Specker theorem does not refer to this set.
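Demystifier's [A,A]=0 point can be made concrete with a pair of spin observables (Pauli matrices, chosen here purely for illustration):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])    # Pauli X
Z = np.array([[1, 0], [0, -1]])   # Pauli Z

def comm(A, B):
    """Commutator [A, B] = AB - BA."""
    return A @ B - B @ A

# Any single observable commutes with itself, so a one-element "set"
# of observables can never exhibit Kochen-Specker contextuality.
assert np.array_equal(comm(X, X), np.zeros((2, 2), dtype=int))

# The theorem only has content for sets containing non-commuting observables:
print(comm(X, Z))   # a nonzero matrix
```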
 
  • #326
Couple of basic questions about BM:
1. What makes position "observable" special in BM? As I understand, the actual Qk are never observed directly, instead atoms of the apparatus interact with the system (and the rest of the universe) in mysterious ways through the guiding equation. But what makes measuring position in this way any different from measuring any other observable?

2. Can guiding equation be reformulated in momentum basis?
 
  • #327
Delta Kilo,
1. See my post #325 above.
2. No.
 
  • #328
Demystifier said:
There is no Kochen-Specker theorem for the position observable, nor for any SINGLE observable. The Kochen-Specker theorem is a theorem about a SET of mutually NON-COMMUTING observables. If the set contains only one observable, then all observables in this set commute with each other (because [A,A]=0), so the Kochen-Specker theorem does not refer to this set.
That was my point. Since the Kochen-Specker theorem does not apply to position, position can be completely non-contextual. But bohm2 seemed to imply that only "faithful" measurements reveal the pre-measurement position. So in Bohmian mechanics is position contextual for "unfaithful" measurements?
 
  • #329
lugita15 said:
That was my point. Since the Kochen-Specker theorem does not apply to position, position can be completely non-contextual. But bohm2 seemed to imply that only "faithful" measurements reveal the pre-measurement position. So in Bohmian mechanics is position contextual for "unfaithful" measurements?
Yes, in BM there are also "unfaithful" measurements of positions, so BM can be said to be more contextual than the Kochen-Specker theorem requires. The best known examples of "unfaithful" measurements in BM are the so-called surreal trajectories.
 
  • #330
lugita15 said:
I'm having a hard time understanding this.
I'm just wondering how the rate of individual detection and the rate of coincidental detection can be attributed to the same underlying parameter.

lugita15 said:
Do you or do you not agree that in my idealized setup, the mismatches are just mismatches of individual detection results?
Yes, I agree. The language surrounding all this can get confusing. But I know what you're saying.

lugita15 said:
And that if the individual detection results are the same, then the mismatches are the same?
I'm not sure what you mean by this.

lugita15 said:
If you have a bunch of data in an excel spreadsheet, then the value of any function calculated from this data is entirely determined by the data. I don't know how you can reasonably disagree with this.
I don't disagree with it. But the individual detection sequences, considered separately, are different data than the sequences, appropriately combined, considered together. The two different data sets are correlated with different measurement parameters. The setting of polarizer a or b is not the same observational context as the angular difference between a and b.

lugita15 said:
You have the uncanny ability of focusing on steps I consider trivial. To my mind, step 5 is a completely obvious consequence of step 4. I am just applying the definition of R, which is that the probability that P(θ1)≠P(θ2) is equal to R(θ1-θ2). How can you disagree with that definition?
Your notation is a bit confusing for me. Say in words what you mean by the above notations.

lugita15 said:
But the result of any analysis, matching, or pairing of the data is surely determined BY the data, is it not?
Ultimately, yes. But the organization of the data, how it's parsed or matched, and what it's correlated with is determined by the experimental design. Individual data sequences composed of 0's and 1's aren't the same as combined data sequences composed of (1,1)'s, (0,0)'s, (1,0)'s, and (0,1)'s.

lugita15 said:
And thus the parameters or hidden variables that determine the data must determine anything that is derived from the data, right?
The individual data sequences, considered separately, are correlated with the settings of the individual polarizers, considered separately.

The combined data sequences are correlated with the angular difference between the individual polarizer settings.

In most (or at least many) LR accounts, the underlying parameter determining individual detection is assumed to be the polarization vector of polarizer-incident photons.

From the assumption of common cause, and the results when polarizers are aligned, it's assumed that this polarization vector is the same for both polarizer-incident photons of an entangled pair.

But here's the problem: the rate of coincidental detection varies only as θ, the angular difference between a and b, varies. (That is, wrt any particular θ, the common underlying polarization vector can be anything, and the rate of coincidental detection will remain the same. But if θ is changed, then the rate of coincidental detection changes as cos²θ, which, afaik, and not unimportantly, is how light would be expected to behave.)

So, it seems to me, θ must be measuring something other than the polarization vector of the polarizer-incident photons.

And it has to be something that, unlike the underlying polarization vector, isn't varying randomly from entangled pair to entangled pair.

So, I reason, θ is measuring a relationship between photons of an entangled pair -- a relationship which, wrt any particular Bell test, doesn't vary from pair to pair, and which Bell tests are designed to produce ... locally.
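For what it's worth, a standard shared-polarization-vector LR model can be simulated directly. The sketch below (a deterministic Malus-style passage rule, an illustrative assumption rather than any specific model proposed above) reproduces perfect correlation at θ = 0 but yields a mismatch rate linear in θ, rather than QM's sin²θ:

```python
import math, random

def passes(setting, lam):
    # Deterministic rule (an assumption): the photon passes iff its shared
    # polarization angle lam lies within 45 degrees of the polarizer axis
    # (all angles in degrees, taken mod 180).
    d = abs((setting - lam) % 180)
    return min(d, 180 - d) < 45

def lr_mismatch_rate(theta, trials=100_000):
    """Mismatch rate predicted by this LR model for polarizers at 0 and theta."""
    rng = random.Random(0)          # fixed seed for reproducibility
    mismatches = 0
    for _ in range(trials):
        lam = rng.uniform(0, 180)   # shared polarization, random per pair
        mismatches += passes(0, lam) != passes(theta, lam)
    return mismatches / trials

for t in (0, 30, 60):
    qm = math.sin(math.radians(t)) ** 2
    print(t, round(lr_mismatch_rate(t), 3), round(qm, 3))
```

This model gives R(θ) ≈ θ/90 for θ ≤ 90, so R(60) ≈ 2·R(30), saturating but never violating the bound; QM's 0.75 > 2 × 0.25 is exactly the gap Bell-type arguments exploit.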
 
  • #331
ThomasT said:
I'm just wondering how the rate of individual detection and the rate of coincidental detection can be attributed to the same underlying parameter.
To me the answer is clear: both rates are calculated from the individual detection results, so the only relevant parameters are those that determine the individual detection results.
Yes, I agree. The language surrounding all this can get confusing. But I know what you're saying.
Yes, I also feel that much of our disagreement may be due to semantics.
lugita15 said:
And that if the individual detection results are the same, then the mismatches are the same?
I'm not sure what you mean by this.
I mean, suppose we have sent an entangled pair of photons through the polarizers, and e.g. we may get 1 on the first detector and 0 on the second detector. Then given these individual detection results, the answer to the question "Was there a mismatch?" is completely determined. So mismatches cannot depend on any parameter that the individual detection results don't already depend on.
I don't disagree with it. But the individual detection sequences, considered separately, are different data than the sequences, appropriately combined, considered together. The two different data sets are correlated with different measurement parameters.
I think this is more semantics. To my mind, the data sets are just composed of the individual data entries, i.e. the individual detection results. So how can the data set as a whole be determined by anything other than what determines each individual entry?
The setting of polarizer a or b is not the same observational context as the angular difference between a and b.
I don't get your point here. To me, it seems so obvious that the angular difference is nothing more and nothing less than the difference of the settings of the two polarizers, so there's nothing special about it.
Your notation is a bit confusing for me. Say in words what you mean by the above notations.
I don't know whether I can, but I can try to explain my notation and then you can ask me what you don't get. Starting from the top, QM predicts that for an entangled pair of photons, you always get identical detection results at identical polarizer settings. From this, the local determinist concludes that both photons are using the same function P(θ) to determine whether to go through the polarizer or not. P has only two values it can have, 0 and 1. If one of the photons encounters a polarizer oriented at a given angle, it plugs the angle into the function P and gets either a 0 or 1 as the answer. If 0, then it doesn't go through the polarizer, and if 1 then it does. Are you clear up to there?

So now the following experiment is done. Polarizer 1 is turned to the angle θ1, Polarizer 2 is turned to θ2, and then we send a trillion entangled pairs of photons to the two polarizers. Each experimenter writes down a list of yes or no answers as to whether each photon goes through the polarizer or not. Then we calculate R(θ1,θ2), which is the percentage of pairs whose individual detection results had a mismatch. Another way of putting this is that R(θ1,θ2) is the observed probability that a randomly selected entangled pair will have a mismatch between individual detection results. Are you clear on that?

Now remember, the individual detection results for a given pair are determined by the function P. So if the pair has a mismatch when one polarizer is oriented at θ1 and one polarizer is oriented at θ2, what that means is that P(θ1)≠P(θ2), meaning the P function for that pair is telling you to do different things at the angle θ1 versus the angle θ2. Now remember, R(θ1,θ2) is the probability that a randomly selected pair will have a mismatch when the polarizers are set at θ1 and θ2. In other words, R(θ1,θ2) is the probability that a randomly selected pair will have a P function which gives contradictory messages at θ1 and θ2, or to put it more simply R(θ1,θ2) is the probability that a randomly selected pair has a P function such that P(θ1)≠P(θ2). Are you clear on that? If you are, then step 5 follows pretty directly. (You see me frequently writing R(θ) instead of R(θ1,θ2), because R(θ1+C,θ2+C)=R(θ1,θ2) for all C, so in particular R(θ1-θ2,0)=R(θ1,θ2), so we can write R(θ1,θ2)=R(θ,0)=R(θ), where θ=θ1-θ2; I hope that's not too confusing.)
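lugita15's steps 5 through 7 can be checked mechanically. Enumerating every possible deterministic P over the three angles shows that a mismatch between -30 and 30 forces a mismatch in at least one half-step (the union bound of step 6), while the assumed QM rates R(θ) = sin²θ break the resulting inequality:

```python
import itertools, math

# Every deterministic assignment of P at the angles -30, 0, 30:
for p_m30, p_0, p_30 in itertools.product((0, 1), repeat=3):
    # If the endpoints disagree, at least one half-step must disagree too.
    if p_m30 != p_30:
        assert (p_m30 != p_0) or (p_0 != p_30)

# Assumed QM mismatch rates for this setup: R(theta) = sin^2(theta).
R30 = math.sin(math.radians(30)) ** 2   # ~0.25
R60 = math.sin(math.radians(60)) ** 2   # ~0.75
# The union bound demands R(60) <= R(30) + R(30), but:
print(R60 <= 2 * R30)   # False
```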
Ultimately, yes. But the organization of the data, how it's parsed or matched, and what it's correlated with is determined by the experimental design. Individual data sequences composed of 0's and 1's aren't the same as combined data sequences composed of (1,1)'s, (0,0)'s, (1,0)'s, and (0,1)'s.
But all these data sequences are composed of the individual detection results, so the only relevant parameters are whatever determines these results. I'm sorry for repeating myself, but I feel like we're communicating on different wavelengths.
 
  • #332
Demystifier said:
Yes, in BM there are also "unfaithful" measurements of positions, so BM can be said to be more contextual than the Kochen-Specker theorem requires.
OK, that's what I was trying to get at. So are there more variants or alternatives of Bohmian mechanics which make position even less contextual?
The best known example of "unfaithful" measurements in BM are the so-called surreal trajectories.
Are these the trajectories where the particle goes one way through a double slit experiment according to Bohmian mechanics, but for some reason detectors at the slits tell a different story? How do Bohmians explain that? (I'm sorry if this is another really trivial question.)
 
  • #333
ThomasT said:
I'm just wondering how the rate of individual detection and the rate of coincidental detection can be attributed to the same underlying parameter.

ThomasT, I'm struggling to understand your reasoning here, so let me ask a simple question.
If photon A encounters polarizer A which parameter does it use to determine whether or not it passes, individual or coincidental?
 
  • #334
lugita15 said:
To me the answer is clear: both rates are calculated from the individual detection results, so the only relevant parameters are those that determine the individual detection results.
But both rates are not calculated from individual detection results.

lugita15 said:
I mean, suppose we have sent an entangled pair of photons through the polarizers, and e.g. we may get 1 on the first detector and 0 on the second detector. Then given these individual detection results, the answer to the question "Was there a mismatch?" is completely determined.
Completely determined by what?

lugita15 said:
... mismatches cannot depend on any parameter that the individual detection results don't already depend on.
But then you're ignoring the obvious inferences from the experimental results.

lugita15 said:
To my mind, the data sets are just composed of the individual data entries, i.e. the individual detection results.
Sure. And human behavior is composed of the behavior of individual atoms that comprise human beings. But you don't seem to realize that these are different observational contexts.

Do you think that you can explain human behavior from the atomic scale?

lugita15 said:
So how can the data set as a whole be determined by anything other than what determines each individual entry?
By "data set as a whole" I suppose that you're referring to coincidental detections.

The answer to your question is that "the data set as a whole" doesn't vary as a function of underlying polarization orientation. But individual detection, presumably, does. So, how would you explain this?

lugita15 said:
I don't get your point here. To me, it seems so obvious that the angular difference is nothing more and nothing less than the difference of the settings of the two polarizers, so there's nothing special about it.
Yes, it's the angular difference of the settings of the two polarizers. That's what makes it a different measurement parameter than the settings of the polarizers considered separately by themselves.

I'll get back to you.
 
  • #335
ThomasT said:
But both rates are not calculated from individual detection results.
Yes, they are. At least in my idealized setup, everything is determined by putting the individual detection results (yes or no answers) in a spreadsheet, and then applying functions on the spreadsheet data.
Sure. And human behavior is composed of the behavior of individual atoms that comprise human beings. But you don't seem to realize that these are different observational contexts.

Do you think that you can explain human behavior from the atomic scale?
Certainly, if human behavior is composed of the behavior of individual atoms, then in principle you can definitely explain all human behavior from the atomic scale. Practically of course it might be insurmountably difficult, but we are talking about whether local determinism in principle contradicts the predictions of QM, not whether currently practical Bell tests are sufficient to definitively disprove local determinism (they're not).
 
  • #336
lugita15 said:
So are there more variants or alternatives of Bohmian mechanics which make position even less contextual?
If there are, I am not aware of it.

lugita15 said:
Are these the trajectories where the particle goes one way through a double slit experiment according to Bohmian mechanics, but for some reason detectors at the slits tell a different story? How do Bohmians explain that? (I'm sorry if this is another really trivial question.)
It's not trivial at all, so I would not like to discuss it in detail. For the details, see e.g. Sec. 4.1 of
http://xxx.lanl.gov/abs/quant-ph/0412119
Let me only say that Bohmians explain it by pointing out that Bohmian trajectories do not coincide with trajectories which one would naively expect from classical physics.
 
  • #337
ThomasT said:
But both rates are not calculated from individual detection results.

I haven't read the post above, so maybe I am misunderstanding what you are saying.
However, coincidence measurements are usually done -in theory AND often in practice- by postprocessing of data from two individual detectors. All you need is two "streams" of time-stamped data.
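f95toli's postprocessing step is straightforward to sketch: given two sorted streams of detection time stamps, a two-pointer sweep pairs up events that fall within a coincidence window (the window value below is an arbitrary illustration):

```python
def match_coincidences(times_a, times_b, window=1e-9):
    """Pair time-stamped detections from detectors A and B: events whose
    time stamps differ by at most `window` seconds count as one coincidence.
    Both input lists are assumed sorted in ascending order."""
    pairs = []
    i = j = 0
    while i < len(times_a) and j < len(times_b):
        dt = times_a[i] - times_b[j]
        if abs(dt) <= window:
            pairs.append((i, j))    # record the matched pair of indices
            i += 1
            j += 1
        elif dt < 0:                # A's event is too early; advance A
            i += 1
        else:                       # B's event is too early; advance B
            j += 1
    return pairs

print(match_coincidences([0.0, 1.0, 2.0], [1.0 + 5e-10, 2.5]))  # [(1, 0)]
```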
 
  • #338
f95toli said:
I haven't read the post above, so maybe I am misunderstanding what you are saying.
However, coincidence measurements are usually done -in theory AND often in practice- by postprocessing of data from two individual detectors. All you need is two "streams" of time-stamped data.
Yes, and combining the individual data sets results in a different data set, coincidental measurements, which is correlated with a different measurement parameter, angular difference.

For your convenience, here's my reasoning (from post #330):

The individual data sequences, considered separately, are correlated with the settings of the individual polarizers, considered separately.

The combined data sequences are correlated with the angular difference between the individual polarizer settings.

In most (or at least many) LR accounts, the underlying parameter determining individual detection is assumed to be the polarization vector of polarizer-incident photons.

From the assumption of common cause, and the results when polarizers are aligned, it's assumed that this polarization vector is the same for both polarizer-incident photons of an entangled pair.

But here's the problem: the rate of coincidental detection varies only as θ, the angular difference between a and b, varies. (That is, wrt any particular θ, the common underlying polarization vector can be anything, and the rate of coincidental detection will remain the same. But if θ is changed, then the rate of coincidental detection changes as cos²θ, which, afaik, and not unimportantly, is how light would be expected to behave.)

So, it seems to me, θ must be measuring something other than the polarization vector of the polarizer-incident photons.

And it has to be something that, unlike the underlying polarization vector, isn't varying randomly from entangled pair to entangled pair.

So, I reason, θ is measuring a relationship between photons of an entangled pair -- a relationship which, wrt any particular Bell test, doesn't vary from pair to pair, and which Bell tests are designed to produce ... locally.
 
  • #339
Joncon said:
ThomasT, I'm struggling to understand your reasoning here, so let me ask a simple question.
If photon A encounters polarizer A which parameter does it use to determine whether or not it passes, individual or coincidental?
Individual. See post #338 for a rehash of my reasoning.
 
  • #340
lugita15 said:
... we are talking about whether local determinism in principle contradicts the predictions of QM ...
The technical requirement, local realism, has been shown by Bell to contradict the predictions of QM. However, according to my current way of thinking about it (see post #338 for the line of reasoning), the assumptions of locality and determinism don't contradict the predictions of QM.

So, unless there's a flaw in my reasoning, then the assumption of superdeterminism isn't necessary.
 
  • #341
ThomasT, did you get through the rest of my post #331? Now do you understand my notations concerning P and R, and do you understand my reasoning for step 5 out of 7? If so, now which of my seven steps do you disagree with, for my idealized setup? Just for everyone's reference, here they are again.
lugita15 said:
1. Pretend you are a local determinist who believes that all the experimental predictions of quantum mechanics are correct.
2. One of these experimental predictions is that entangled photons are perfectly correlated when sent through polarizers oriented at the same angle.
3. From this you conclude that both photons are consulting the same function P(θ). If P(θ)=1, then the photon goes through the polarizer, and if it equals zero the photon does not go through.
4. Another experimental prediction of quantum mechanics is that if the polarizers are set at different angles, the mismatch (i.e. the lack of correlation) between the two photons is a function R(θ) of the relative angle between the polarizers.
5. From this you conclude that the probability that P(-30)≠P(0) is R(30), the probability that P(0)≠P(30) is R(30), and the probability that P(-30)≠P(30) is R(60).
6. It is a mathematical fact that if you have two events A and B, then the probability that at least one of these events occurs (in other words the probability that A or B occurs) is less than or equal to the probability that A occurs plus the probability that B occurs.
7. From this you conclude that the probability that P(-30)≠P(30) is less than or equal to the probability that P(-30)≠P(0) plus the probability that P(0)≠P(30), or in other words R(60)≤R(30)+R(30)=2R(30).
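As a quick numeric sanity check on step 7: assuming the standard idealized QM prediction that the mismatch rate at relative angle θ is sin²θ (an assumption of this sketch, not part of the seven steps themselves), the bound R(60) ≤ 2R(30) fails:

```python
import math

def R_qm(theta_deg):
    # Idealized QM prediction: mismatch rate between entangled photons
    # at relative polarizer angle theta is sin^2(theta).
    return math.sin(math.radians(theta_deg)) ** 2

r30, r60 = R_qm(30), R_qm(60)
print(f"R(30)={r30:.2f}  R(60)={r60:.2f}  2*R(30)={2*r30:.2f}")
# prints: R(30)=0.25  R(60)=0.75  2*R(30)=0.50
print("Bell-type bound R(60) <= 2*R(30) holds?", r60 <= 2 * r30)  # prints False
```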
 
  • #342
lugita15 said:
... which of my seven steps do you disagree with ... ?
I don't necessarily disagree with any of them. I just thought that Step 5. is where the independence is introduced.

For now, I'll ask a question about Step 3. :
lugita15 said:
3. From this you conclude that both photons are consulting the same function P(θ). If P(θ)=1, then the photon goes through the polarizer, and if it equals zero the photon does not go through.
Is your P(θ) the hidden variable, the underlying parameter, that Bell originally referred to as λ?
 
  • #343
ThomasT said:
I don't necessarily disagree with any of them.
Well, if you agree with all my steps then I've won, because the whole point of my argument is to show that a local determinist cannot agree with all the predictions of quantum mechanics. So if you disagree with my conclusion, you have to disagree with one of my steps.
I just thought that Step 5. is where the independence is introduced.
No, step 5 is a completely trivial step, as I think you now understand. The only locality assumption I see is in step 3.
For now, I'll ask a question about Step 3. :
Is your P(θ) the hidden variable, the underlying parameter, that Bell originally referred to as λ?
Yes, P(θ) is the local hidden variable that determines the individual detection results.
 
  • #344
lugita15 said:
Well, if you agree with all my steps then I've won, because the whole point of my argument is to show that a local determinist cannot agree with all the predictions of quantum mechanics.
I don't disagree with any of Bell's steps either. His program was to construct a model of entanglement that encoded a locality assumption. He proved that any such model was incompatible with QM. But he didn't prove that nature is nonlocal. He just proved that any model of entanglement that encodes an independence feature (which is how the assumption of locality is encoded) is incompatible with QM.

lugita15 said:
So if you disagree with my conclusion, you have to disagree with one of my steps.
I don't think so. To retain the assumptions of locality and determinism, I just have to show where, in your line of reasoning, the conclusion (which contradicts the known behavior of light) that there's a linear correlation between the angular difference between the polarizers and the rate of coincidental detection becomes inevitable.

lugita15 said:
The only locality assumption I see is in step 3.
The first sentence in Step 3. isn't a locality assumption. It's a common cause assumption. This isn't what differentiates LR models from QM. QM assumes a common cause also, because that's what the experiments are designed to produce.

lugita15 said:
Yes, P(θ) is the local hidden variable that determines the individual detection results.
Ok.

P(θ) or λ is usually understood as the polarization vector of the optical disturbance incident on the polarizer. An LR model of rate of individual detection as determined by the polarizer orientation and the orientation of λ is compatible with QM.

But wrt rate of coincidental detection, this doesn't work. Wrt any particular value of θ, the orientation of λ can be anything, and the rate of coincidental detection will remain the same.

This can be visualized via a circle with two lines through the center representing the polarizer settings, and a third line through the center representing λ. No matter how λ is rotated, as long as θ remains the same, then the rate of coincidental detection doesn't vary.
So, λ is not determining the rate of coincidental detection.
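The visualization described above can be sketched numerically. The pass rule below is purely illustrative (a hypothetical deterministic model in which a photon passes iff its shared hidden polarization λ lies within 45° of the polarizer axis — not ThomasT's or anyone's actual model); averaging over uniformly random λ yields a mismatch rate that depends only on θ, and linearly (≈ θ/90°), rather than as QM's sin²θ:

```python
import random

def passes(polarizer_deg, lam_deg):
    # Hypothetical deterministic local rule (illustration only): the
    # photon passes iff its hidden polarization angle lambda lies within
    # 45 degrees of the polarizer axis (angles identified mod 180).
    d = abs(polarizer_deg - lam_deg) % 180
    return min(d, 180 - d) < 45

def mismatch_rate(theta_deg, trials=100_000):
    # Average over a uniformly random shared lambda, one per entangled pair.
    mismatches = 0
    for _ in range(trials):
        lam = random.uniform(0, 180)
        a = passes(0, lam)          # polarizer A fixed at 0 degrees
        b = passes(theta_deg, lam)  # polarizer B offset by theta
        mismatches += (a != b)
    return mismatches / trials

random.seed(0)
print(round(mismatch_rate(30), 3))  # ~1/3, linear in theta; QM predicts sin^2(30) = 0.25
print(round(mismatch_rate(60), 3))  # ~2/3 = 2*(1/3); QM predicts sin^2(60) = 0.75
```

Note that this toy model satisfies lugita15's bound R(60) ≤ 2R(30) with equality, while the QM values (0.25 and 0.75) violate it.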
 
  • #345
ThomasT said:
I don't disagree with any of Bell's steps either. His program was to construct a model of entanglement that encoded a locality assumption. He proved that any such model was incompatible with QM. But he didn't prove that nature is nonlocal. He just proved that any model of entanglement that encodes an independence feature (which is how the assumption of locality is encoded) is incompatible with QM.
OK, but regardless of what you think Bell's purpose was, I hope it's clear to you that my purpose is explicitly to show that you cannot be a local determinist and believe that all the predictions of QM are correct. The conclusion of my argument is that R(60)≤2R(30), which is in direct contradiction with the predictions of QM. So if you believe that local determinism IS compatible with the predictions of QM, then you disagree with my last step and thus you must disagree with one of the earlier steps. So which is it?
The first sentence in Step 3. isn't a locality assumption. It's a common cause assumption. This isn't what differentiates LR models from QM. QM assumes a common cause also, because that's what the experiments are designed to produce.
I hope we can stop talking about the formal models you call LR, because my goal isn't to show that some particular formal model is incompatible with QM, but rather that ANY local deterministic theory is incompatible with the predictions of QM.

But step 3 is definitely not something a believer in (an orthodox interpretation of) QM would accept. He wouldn't believe that individual detection results are predetermined by a commonly held function P(θ). Instead, he would think that the particle makes a random decision on the spot when it's measured, and then the wavefunction of the two-particle system collapses (nonlocally of course) so that the other particle will also do the same thing when put through a detector at the same setting.
Ok.

P(θ) or λ is usually understood as the polarization vector of the optical disturbance incident on the polarizer. An LR model of rate of individual detection as determined by the polarizer orientation and the orientation of λ is compatible with QM.

But wrt rate of coincidental detection, this doesn't work. Wrt any particular value of θ, the orientation of λ can be anything, and the rate of coincidental detection will remain the same.

This can be visualized via a circle with two lines through the center representing the polarizer settings, and a third line through the center representing λ. No matter how λ is rotated, as long as θ remains the same, then the rate of coincidental detection doesn't vary.
So, λ is not determining the rate of coincidental detection.
I've already tried to tell you that the percentage of mismatches is determined entirely by the individual detection results, but let's not rehash that; we may just be having some semantic differences on that point. Just tell me which of the seven steps you disagree with. Or if you prefer, which of the seven steps is such that not all local determinists would be forced to accept it?
 
  • #346
lugita15 said:
I've already tried to tell you that the percentage of mismatches is determined entirely by the individual detection results, but let's not rehash that; we may just be having some semantic differences on that point.
I don't think it's just a semantic difference. Do the visualization I suggested. It becomes quite clear that λ, the underlying polarization vector, isn't determining coincidental detection.

What you're not getting is that the relationship between entangled photons and the polarization of the pair are two different underlying parameters. It's the polarization that determines individual detection, and the relationship that determines coincidental detection.

lugita15 said:
... which of the seven steps is such that not all local determinists would be forced to accept it?
We can start with the second sentence in Step 3.
 
  • #347
ThomasT said:
I don't think it's just a semantic difference. Do the visualization I suggested. It becomes quite clear that λ, the underlying polarization vector, isn't determining coincidental detection.

What you're not getting is that the relationship between entangled photons and the polarization of the pair are two different underlying parameters. It's the polarization that determines individual detection, and the relationship that determines coincidental detection.
But in my idealized setup, coincidental detection comes entirely from the individual detection. I thought you acknowledged that here:
ThomasT said:
lugita15 said:
To my mind, the data sets are just composed of the individual data entries, i.e. the individual detection results.

Sure. And human behavior is composed of the behavior of the individual atoms that comprise human beings. But you don't seem to realize that these are different observational contexts.

Do you think that you can explain human behavior from the atomic scale?
And my answer was yes, if human behavior is composed of the behavior of the individual atoms then in principle you can completely explain human behavior from the atomic scale. So would you similarly acknowledge that if the coincidental detection results are composed of the individual detection results, as is the case for my idealized setup, then in principle the former can be completely explained in terms of the latter?
We can start with the second sentence in Step 3.
OK, so as a local determinist, what do you find objectionable in that sentence? "If P(θ)=1, then the photon goes through the polarizer, and if it equals zero the photon does not go through."
 
  • #348
lugita15 said:
But in my idealized setup, coincidental detection comes entirely from the individual detection.
Coincidental detection comes from the relationship between entangled photons. This isn't what's being measured in the individual context.

lugita15 said:
... if human behavior is composed of the behavior of the individual atoms ...
It isn't.

lugita15 said:
... then in principle you can completely explain human behavior from the atomic scale.
You can't, not even in principle.

lugita15 said:
So would you similarly acknowledge that if the coincidental detection results are composed of the individual detection results ...
I'm not arguing that. Obviously, coincidental results are composed of individual results.

lugita15 said:
... then in principle the former can be completely explained in terms of the latter?
No, because we're dealing with two different observational contexts wrt which there are two different underlying parameters.

lugita15 said:
OK, so as a local determinist, what do you find objectionable in that sentence? "If P(θ)=1, then the photon goes through the polarizer, and if it equals zero the photon does not go through."
P(θ) is supposed to be the underlying parameter determining individual detection -- which is usually understood as the underlying polarization orientation. So P(θ) would have values in degrees or radians.
 
  • #349
ThomasT said:
P(θ) is supposed to be the underlying parameter determining individual detection -- which is usually understood as the underlying polarization orientation. So P(θ) would have values in degrees or radians.
But P(θ) just tells the particle whether to go through the polarizer or not. So the only instruction it gives the particle is a yes or a no, or equivalently a 1 or a 0.
 
  • #350
lugita15 said:
But P(θ) just tells the particle whether to go through the polarizer or not. So the only instruction it gives the particle is a yes or a no, or equivalently a 1 or a 0.
Just to add to this, the function P(θ), since it is the hidden variable, can be determined by any number of things, including a polarization vector or anything else. But the input of the function must be the polarizer setting, and the output must be a yes-or-no instruction telling the particle to go through or not.
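The description above — input a polarizer setting, output a go/no-go instruction — can be sketched as a function built from some hidden datum. Here the datum is, hypothetically, a shared polarization angle λ; the particular pass rule is illustrative only, not anyone's actual model:

```python
def make_P(lam_deg):
    # Build the instruction function P for one entangled pair.
    # Input: polarizer setting theta in degrees.
    # Output: 1 (photon passes) or 0 (photon is blocked).
    def P(theta_deg):
        d = abs(theta_deg - lam_deg) % 180
        return 1 if min(d, 180 - d) < 45 else 0
    return P

P = make_P(10.0)            # both photons of the pair consult this same P
print(P(-30), P(0), P(30))  # prints: 1 1 1
```

Whatever determines P internally (a polarization vector or anything else), the only thing the particle receives from it is the 1-or-0 instruction.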
 
<h2>1. Why is superdeterminism not the universally accepted explanation of nonlocality?</h2><p>Superdeterminism is not universally accepted because it denies the statistical independence of measurement settings and the systems being measured — the "free choice" assumption underlying Bell tests. Many physicists reject it as conspiratorial: it implies that the universe's initial conditions correlate experimenters' choices with the hidden variables of the particles they measure, which would undermine the reliability of experimental science generally.</p><h2>2. What evidence supports the rejection of superdeterminism as an explanation for nonlocality?</h2><p>Strictly speaking, violations of Bell's inequality cannot rule out superdeterminism, because superdeterminism works precisely by denying one of Bell's assumptions (the independence of measurement settings from hidden variables). The main objections are instead methodological: no concrete superdeterministic theory has been formulated that reproduces the quantum predictions, and the required correlations between settings and hidden variables appear contrived.</p><h2>3. Are there alternative explanations for nonlocality other than superdeterminism?</h2><p>Yes. Alternatives include explicitly nonlocal hidden-variable theories (such as de Broglie–Bohm theory) and interpretations that accept irreducible randomness, treating the observed correlations as nonlocal in character but unusable for faster-than-light signaling.</p><h2>4. What implications would accepting superdeterminism have on our understanding of the universe?</h2><p>If superdeterminism were accepted, it would mean that all events, including experimenters' choices of measurement settings, are predetermined and correlated with the systems being measured. This would challenge the assumption of experimental freedom on which the scientific method relies, as well as everyday notions of randomness and agency.</p><h2>5. Is there ongoing research and debate surrounding the concept of superdeterminism and its relation to nonlocality?</h2><p>Yes. Superdeterminism remains a minority position, but it continues to be discussed in the foundations of quantum mechanics, and there is no consensus on the ultimate explanation of nonlocality.</p>

