Why is superdeterminism not the universally accepted explanation of nonlocality?

In summary, the conversation discusses the concept of nonlocality and entanglement in a deterministic universe, where the information about instantaneous transfer is known to the universe. The conversation also touches upon the idea of superdeterminism, which some people reject due to its conspiratorial nature and lack of a concrete scientific theory. The possibility of interpreting nonlocality as an answer rather than a problem is also mentioned, as well as the importance of keeping beliefs aligned with measured reality. The conversation concludes with the suggestion that it may be better to believe in the existence of random and non-local phenomena rather than inventing longer explanations.
  • #246
ThomasT said:
Ah. Ok, still not sure about it. But that's ok. A bit off-topic. Of course it's your topic, so I guess you can swerve a bit now and then, so to speak. Not sure about the rules on that.

Anyway, yeah, I agree that it's generally not a good idea to blindly accept assertions from anybody, though, in my experience, DrC's assertions are generally pretty good. But nevertheless check them out.

Yeah, I didn't understand the part 2 of superdeterminism when I made this thread. I should have just said determinism.
 
  • #247
jadrian said:
From my thinking, nonlocality and entanglement are never a problem because in a totally deterministic universe, the information about what is going to be instantaneously transferred from A to B is already known to the universe. We may not be in block time, but the universe acts as if it were. This is the first thing I've come across that agrees with my resolution of instantaneous info transfer.

Even though I personally believe that entanglement is basically a zero-sum static, and that it is essentially noneffectual on the universe, just something we have to live with; it does not violate relativity because the information does not have any effect on anything anywhere. Why is this not mainstream? Do most people want to live in an undetermined future, thinking it's closer to free will?
The idea that Nature somehow conspires to produce the results observed in Aspect-, Gisin-, Zeilinger-type experiments (which do not necessarily require strong determinism, only the existence of some pre-determinism at the Planck level and no counterfactual definiteness in the experiments) is still a solution to the problem, no doubt. And it is by no means beyond science. Unfortunately, at this time this program is far from being even remotely close to the alternative programs which accept counterfactual definiteness in the experiments.

But this does not mean that it cannot become progressive in the future. We must remain open to this. In my view this is the best decision at this moment in time; inventing another 'no-go theorem' in this case (extremely shaky anyway) is at best counter-productive (and could even prove to be an error far worse than von Neumann's 'proof' that hidden variables are impossible). Happily, some leading scientists take it seriously, among them 't Hooft:

http://arxiv.org/abs/hep-th/0104219
http://arxiv.org/abs/quant-ph/0212095
http://arxiv.org/abs/hep-th/0105105
 
  • #248
ThomasT said:
@ lugita15,

The exchange between you and I got a little off track. Which was my fault, and I apologize for not taking the time to sort it out properly. Below I'll comment in reference to an exchange between you and zonde, and hopefully any misunderstanding will be clarified.
OK, and I think one thing that leads to misunderstanding is a terminology issue. You're using local determinism to refer to a philosophical stance, while you're using local realism to refer to a particular formal model which tries to implement this philosophical stance. I'm using both local realism and local determinism, pretty much interchangeably, to refer to the philosophical stance, not to any formal model or formal constraint. So just keep that in mind when reading my posts.
I think that there's some step or steps in the LR line of reasoning which then lead(s) to the logically necessary conclusion that the correlation between θ and rate of coincidental detection should be linear. But I don't think it's the prediction of perfect correlation at θ = 0°. After all, QM predicts the same thing as LR at θ = 0°, but wrt all θ the QM and LR correlations are different. So it seems that we can't attribute that difference to the prediction of perfect correlation at θ = 0°.
OK, let me try once more to show you how the logic of Bell's theorem forces any local determinist to disagree with at least some of the predictions of quantum mechanics.

1. Pretend you are a local determinist who believes that all the experimental predictions of quantum mechanics are correct.
2. One of these experimental predictions is that entangled photons are perfectly correlated when sent through polarizers oriented at the same angle.
3. From this you conclude that both photons are consulting the same function P(θ). If P(θ)=1, then the photon goes through the polarizer, and if it equals zero the photon does not go through.
4. Another experimental prediction of quantum mechanics is that if the polarizers are set at different angles, the mismatch (i.e. the lack of correlation) between the two photons is a function R(θ) of the relative angle between the polarizers.
5. From this you conclude that the probability that P(-30)≠P(0) is R(30), the probability that P(0)≠P(30) is R(30), and the probability that P(-30)≠P(30) is R(60).
6. It is a mathematical fact that if you have two events A and B, then the probability that at least one of these events occurs (in other words the probability that A or B occurs) is less than or equal to the probability that A occurs plus the probability that B occurs.
7. From this you conclude that the probability that P(-30)≠P(30) is less than or equal to the probability that P(-30)≠P(0) plus the probability that P(0)≠P(30), or in other words R(60)≤R(30)+R(30)=2R(30).

Which of these steps do you disagree with and why?
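For concreteness, the numbers behind step 7 can be checked with a short Python sketch. It assumes the standard QM mismatch rate for polarization-entangled photons, R(θ) = sin²θ (an assumption consistent with the 25% and 75% figures quoted later in this thread), and tests it against the local-determinist bound:

import math

def qm_mismatch(theta_deg):
    # QM prediction (assumed here): the two outcomes disagree with
    # probability sin^2(theta), theta being the relative polarizer angle.
    return math.sin(math.radians(theta_deg)) ** 2

r30 = qm_mismatch(30)   # 0.25
r60 = qm_mismatch(60)   # 0.75

# Step 7's bound for any shared deterministic P(theta): R(60) <= 2*R(30)
print(f"R(30) = {r30:.2f}, R(60) = {r60:.2f}, 2*R(30) = {2 * r30:.2f}")
print("QM obeys the local bound?", r60 <= 2 * r30)   # prints False

Since 0.75 > 0.50, anyone who accepts steps 1-7 cannot also accept every QM prediction, which is exactly the point being argued.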
I already agree with this, and have said so many times in this thread. But you're not then done. This is where the assessment of the necessity of a local superdeterministic model of quantum entanglement begins.
But you're only agreeing that some particular formal model does not agree with the predictions of QM. In my 7-step argument above, I am trying to prove that ANY believer in local determinism MUST disagree with some of the predictions of QM.
Ok, now I disagree.

This is the basis of both the QM and LR treatments, but I would argue that, given this perfect correlation (ideally) at θ = 0°, one is not then forced to believe in a linear correlation wrt all values of θ. After all, the QM treatment leads to a nonlinear correlation wrt all θ.
The reason that quantum mechanics is able to have both perfect correlation at identical angles and nonlinear correlations as a function of angle is that QM does not say that the decision about whether the photon goes through the polarizer or not is predetermined by a function P(θ). In particular, if one polarizer is turned to -30 degrees and the other polarizer is turned to 30 degrees, quantum mechanics doesn't believe that the photons have a definite polarization at 0 degrees, and thus QM does not believe in P(0) which is essential for the proof above.
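For reference, the ideal QM predictions for polarization-entangled photons being discussed here, with θ the relative angle between the two polarizers, are:

P(same outcome) = cos²θ  (so perfect correlation at θ = 0°)
P(different outcomes) = R(θ) = sin²θ  (so 25% mismatch at 30° and 75% at 60°)

Both are nonlinear in θ; the question in dispute is whether a local-deterministic P(θ) can reproduce them.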
Then the question will be: what, exactly, has been proven, and does our understanding of BI violations necessarily warrant the assumption of superdeterminism in order to maintain a local deterministic view of our universe?
Yes, it does. The proof above decisively shows that in any local determinist universe, we must have R(60)≤2R(30). But the way superdeterminism gets around this is by saying that it is impossible to get accurate measurements of R(30) and R(60), because the experiment is rigged: since whatever is controlling the measurement decision interacted in the past with the (ancestors of) the entangled photons, the experimenters are selecting the angles just right (because the particles controlling them know the exact details of the function P(θ) for the entangled pair of photons) so that it appears that R(60)>2R(30).

That's why I said the following to you earlier in this thread:

"So here's another way to put it: An ordinary local realist theory just assumes that particles which are considered entangled according to QM must have had local interactions in the past which is determining their EPR-type nonlocal correlations today. But a local superdeterminist theory assumes that a particle must have interacted in the past with not only those that are entangled with it according to quantum mechanics, but also other particles which quantum mechanics would say have no connection with it. This is how a local superdeterministic theory would be able to produce Bell-type nonlocal correlations."

And again, remember that when I say local realism I mean the philosophical stance you call local determinism.
 
  • #249
jadrian said:
You're pretty much proving my point. The info is essentially static until reconciled by the experiment, which will always occur slower than c.
Yes, the local information will always show boring 50-50 results, and it's only when you make a slower-than-light comparison of the data that you are able to see the nonlocal correlation.
 
  • #250
jadrian said:
If the nonlocal stuff cannot have FTL effects on its surroundings, then it can be regarded as not existing.
Well, it depends on your terminology. It's as if the faster-than-light stuff has effects, but those effects cannot be discovered until you do slower-than-light communication or travel.
 
  • #251
jadrian said:
Yeah, I didn't understand the part 2 of superdeterminism when I made this thread. I should have just said determinism.
So now that you understand what the part 2 is (conspiratorial initial conditions), do you consider yourself a nonlocal determinist like the Bohmians, as opposed to a superdeterminist?
 
  • #252
lugita15 said:
Duh, the lasers can see the entire future of the universe and know which photons will later be considered entangled according to QM. Duh, it changes their undetectable hidden variables, which don't have any effect on the particles until an entanglement experiment is done.

DrChinese, let out your inner conspiracy theorist! :smile:

:biggrin:

I guess I just don't have what it takes... I feel so inadequate.
 
  • #253
lugita15 said:
I agree that this is the point of contention, but keep in mind that he thinks a local realist can believe in the nonlinear correlation given by Malus' law, while at the same time also believing that there is perfect correlation at identical settings. I hope you agree that he's wrong on this point.
Malus' law does not describe correlations between two photons but the intensity change of a single beam of light.
Otherwise yes, I do agree that this doesn't work.

lugita15 said:
Out of curiosity, which experimental loophole of Bell tests do you cling onto? Detector efficiency, communication, freedom of choice, or something else?
It's "fair sampling assumption not holding for photons" loophole.
 
  • #254
zonde said:
Malus' law does not describe correlations between two photons but the intensity change of a single beam of light.
Yes, I was just using the term in a generalized sense to refer to sinusoidal θ-dependence.
Otherwise yes, I do agree that this doesn't work.
I hope we can convince ThomasT of that.
It's "fair sampling assumption not holding for photons" loophole.
But surely, as technology improves, it should be fairly easy to send photons one at a time, have every pair be entangled, and have every pair be collected by a photon detector, so no sampling issue will arise. Also, haven't entanglement experiments been done on all kinds of things, including qubits in the context of quantum computing, so don't you need a more general objection to Bell's theorem than just photons?
 
  • #255
lugita15 said:
But surely, as technology improves, it should be fairly easy to send photons one at a time, have every pair be entangled, and have every pair be collected by a photon detector, so no sampling issue will arise. Also, haven't entanglement experiments been done on all kinds of things, including qubits in the context of quantum computing, so don't you need a more general objection to Bell's theorem than just photons?

You are correct, and you probably already know this, but most consider this to be a disproof of the fair sampling assumption:

http://www.nature.com/nature/journal/v409/n6822/full/409791a0.html

For reasons I do not fully understand, most local realists simply reject this by pointing out that the locality loophole (closed by Weihs et al. in 1998) is not closed simultaneously.
 
  • #256
DrChinese said:
You are correct, and you probably already know this, but most consider this to be a disproof of the fair sampling assumption:

http://www.nature.com/nature/journal/v409/n6822/full/409791a0.html

For reasons I do not fully understand, most local realists simply reject this by pointing out that the locality loophole (closed by Weihs et al. in 1998) is not closed simultaneously.
I brought this up with zonde in another thread, and his response was somewhat strange:
zonde said:
Oh, but I said that it does not hold in photon experiments.
Or do you want to argue that we can apply one to one results of ion experiment to photon experiment?
zonde said:
Basically you have to assume that Bell inequality violations appear due to the same (unknown) physical mechanism in ion experiments and photon experiments only then it means something. Obviously it is much more preferable to avoid such assumptions.

Apparently, he thinks that different kinds of particles exploit different loopholes to Bell's theorem! He believes that ions exploit the communication loophole, while photons exploit the fair sampling loophole.
 
  • #257
lugita15 said:
Demystifier, can you answer another question about Bohmian mechanics? ... If everything in the universe is interacting nonlocally with everything else through their pilot waves, then why is it that we only observe the nonlocal correlation caused by this nonlocal interaction when we do measurements of entangled particles? Is there something special about entanglement that reveals the nonlocal interactions that are always present?
In Bohmian mechanics, it is not true that everything in the universe is interacting with everything else. Instead, in Bohmian mechanics a particle interacts with another particle through a quantum potential ONLY when there is entanglement.
 
  • #258
Demystifier said:
In Bohmian mechanics, it is not true that everything in the universe is interacting with everything else. Instead, in Bohmian mechanics a particle interacts with another particle through a quantum potential ONLY when there is entanglement.
First of all, I thought the nonlocal stuff like entanglement was handled through the pilot wave, not the quantum potential. I thought before the particle gets to any place, the pilot wave has already gone faster than the speed of light to that location, collected information about it, and has given that info to the particle. So if there is a double slit experiment coming up ahead, the pilot wave goes through the double slit, and depending on what detectors the apparatus has it tells the particle what trajectory to travel through. Do I have that roughly right? If so, doesn't this constitute nonlocal interaction between the particles and distant objects like the double slit apparatus which seemingly have nothing to do with it?

Also, what exactly is the Bohmian view of entanglement?
 
  • #259
lugita15 said:
First of all, I thought the nonlocal stuff like entanglement was handled through the pilot wave, not the quantum potential.
The quantum potential is a quantity uniquely determined by the pilot wave. So anything handled by the quantum potential is handled also by the pilot wave.
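For readers unfamiliar with the term, the standard single-particle definition makes this explicit: writing the pilot wave in polar form ψ = R·exp(iS/ħ), the quantum potential is

Q = −(ħ²/2m)·(∇²R)/R

so specifying the pilot wave fixes the quantum potential completely.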

lugita15 said:
I thought before the particle gets to any place, the pilot wave has already gone faster than the speed of light to that location, collected information about it, and has given that info to the particle.
The pilot wave does not travel faster than the speed of light.

lugita15 said:
Also, what exactly is the Bohmian view of entanglement?
Just as in standard QM, the wave function (which is the same thing as the pilot wave) of many particles is entangled when this wave function cannot be written as a product of wave functions of single particles.
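A concrete example: the two-photon polarization state

(|H⟩₁|H⟩₂ + |V⟩₁|V⟩₂)/√2

cannot be written as a product |φ⟩₁⊗|χ⟩₂ of single-photon states, so it is entangled, whereas |H⟩₁|V⟩₂ factorizes and is not.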
 
  • #260
Demystifier, if pilot waves don't go faster than light, then what explains the nonlocality of entanglement? Does the quantum potential propagate faster than light?

Also, am I wrong in my impression that a particle's trajectory right now is determined in part by the apparatuses it knows, based on nonlocal interaction, that it's going to encounter later?
 
  • #261
lugita15 said:
OK, and I think one thing that leads to misunderstanding is a terminology issue. You're using local determinism to refer to a philosophical stance, while you're using local realism to refer to a particular formal model which tries to implement this philosophical stance.
Yes, I think it's a good idea to keep the technical physics meaning of local realism separate from the philosophical meaning of local determinism.

lugita15 said:
I'm using both local realism and local determinism, pretty much interchangeably, to refer to the philosophical stance, not to any formal model or formal constraint. So just keep that in mind when reading my posts.
I'll keep that in mind wrt your posts. But I think it would be a good idea to separate the two.

lugita15 said:
I am trying to prove that ANY believer in local determinism MUST disagree with some of the predictions of QM.
Ok, it's clear to me now that that's what you're trying to prove.

lugita15 said:
The reason that quantum mechanics is able to have both perfect correlation at identical angles and nonlinear correlations as a function of angle is that QM does not say that the decision about whether the photon goes through the polarizer or not is predetermined by a function P(θ).
Bell showed that the view that individual detection is determined by some (LR) function guiding photon behavior is compatible with QM. A LR model of individual detection isn't a problem, and isn't ruled out. It's trying to model coincidental detection in terms of the function that determines individual detection that's a problem, and is ruled out.

The crux of why I think one can be a local determinist while still believing that Bell-type LR models of quantum entanglement are ruled out is the assumption that what determines individual detection is not the same underlying parameter as what determines coincidental detection.

The assumption regarding individual detection is that it's determined by the value of some locally produced (eg., via common emission source) property (eg., the electrical vector) of the photon incident on the polarizing filter. It's further assumed that this is varying randomly from entangled pair to entangled pair. So, there is a 50% reduction in detection rate at each of the individual detectors with the polarizers in place (compared to no polarizers), and a random accumulation of detections. (Wrt individual detection, LR and QM predictions are the same).

The assumption regarding coincidental detection is that, wrt each entangled pair, what is being measured by the joint polarizer settings is the locally produced (eg., via common emission source) relationship between the polarizer-incident photons of a pair.

Because A and B always record identical results, (1,1) or (0,0) wrt a given coincidence interval when the polarizers are aligned, and because the rate of coincidental detection varies predictably (as cos²θ in the ideal), then it's assumed that the underlying parameter (the locally produced relationship between the photons of a pair) determining coincidental detection isn't varying from pair to pair. It might be further assumed that the value of the relevant property is the same for each photon of a given pair (ie., that the separated polarizers are measuring exactly the same value of the same property wrt any given pair). But that value only matters wrt individual detection, not wrt coincidental detection.

And here's the problem. The LR program requires that coincidental detection be modeled in terms of the underlying parameter that determines individual detection. But how can it do that if the underlying parameter that determines coincidental detection is different than the underlying parameter that determines individual detection?

There have been attempts to model entanglement this way (ie., in terms of an unchanging underlying parameter that doesn't vary from entangled pair to entangled pair), but they've been rejected as being non-Bell-type LR models.

Regarding your 12-step LR reasoning (reproduced below), the problem begins in trying to understand coincidental detection in terms of step 2.

I hope the above makes it clearer why I think that one can believe that the LR program (regarding the modelling of quantum entanglement) is kaput, while still believing that the best working assumptions are that our universe is evolving locally deterministically. And so, no need for superdeterministic theories of quantum entanglement.

--------------------------------------------------------------------
lugita15 said:
1. If you have an unpolarized photon, and you put it through a detector, it will have a 50-50 chance of going through, regardless of the angle it's oriented at.

2. A local realist would say that the photon doesn't just randomly go through or not go through the detector oriented at an angle θ; he would say that each unpolarized photon has its own function P(θ) which is guiding its behavior: it goes through if P(θ)=1 and it doesn't go through if P(θ)=0.

3. Unfortunately, for any given unpolarized photon we can only find out one value of P(θ), because after we send it through a detector and it successfully goes through, it will now be polarized in the direction of the detector and it will "forget" the function P(θ).

4. If you have a pair of entangled photons and you put one of them through a detector, it will have a 50-50 chance of going through, regardless of the angle it's oriented at, just like an unpolarized photon.

5. Just as above, the local realist would say that the photon is acting according to some function P(θ) which tells it what to do.

6. If you have a pair of entangled photons and you put both of them through detectors that are turned to the same angle, then they will either both go through or both not go through.

7. Since the local realist does not believe that the two photons can coordinate their behavior by communicating instantaneously, he concludes the reason they're doing the same thing at the same angle is that they're both using the same function P(θ).

8. He is in a better position than he was before, because now he can find out the values of the function P(θ) at two different angles, by putting one photon through one angle and the other photon through a different angle.

9. If the entangled photons are put through detectors 30° apart, they have a 25% chance of not matching.

10. The local realist concludes that for any angle θ, the probability that P(θ±30°)≠P(θ) is 25%, or to put it another way the probability that P(θ±30°)=P(θ) is 75%.

11. So 75% of the time, P(-30)=P(0), and 75% of the time P(0)=P(30), so there's no way that P(-30)≠P(30) 75% of the time.

12. Yet when the entangled photons are put through detectors 60° apart, they have a 75% chance of not matching, so the local realist is very confused.
----------------------------------------------------------------------
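To see how step 11 plays out numerically, here is a small Monte Carlo sketch of one possible local-deterministic model. The specific rule (each pair carries a shared hidden axis λ, and a photon passes a polarizer set at θ iff λ lies within 45° of θ) is an illustrative assumption, not anything proposed in this thread. It gives perfect correlation at equal settings and 50-50 individual rates, but its mismatch rates stay within the bound R(60) ≤ 2R(30):

import random

def outcome(setting_deg, lam_deg):
    # Toy local-deterministic rule (illustrative assumption): pass the
    # polarizer iff the hidden axis is within 45 degrees of the setting.
    d = abs(setting_deg - lam_deg) % 180
    return 1 if min(d, 180 - d) < 45 else 0

def mismatch_rate(a_deg, b_deg, n=200_000):
    mismatches = 0
    for _ in range(n):
        lam = random.uniform(0, 180)   # shared hidden variable for the pair
        # both photons consult the same rule, as step 7 requires
        if outcome(a_deg, lam) != outcome(b_deg, lam):
            mismatches += 1
    return mismatches / n

r30 = mismatch_rate(0, 30)   # ~0.33 for this model
r60 = mismatch_rate(0, 60)   # ~0.67
print(f"R(30) ~ {r30:.2f}, R(60) ~ {r60:.2f}, 2*R(30) ~ {2 * r30:.2f}")

This particular model's mismatch rates grow linearly with the angle (about 33% at 30° and 67% at 60°, saturating the bound), which is the 'linear correlation' mentioned earlier in the thread; no such model can reach QM's 25% and 75%.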
 
  • #262
lugita15 said:
But surely, as technology improves, it should be fairly easy to send photons one at a time, have every pair be entangled, and have every pair be collected by a photon detector, so no sampling issue will arise.
Maybe not easy but certainly feasible. And yet there are no reports about experiments with improved pair detection efficiency.

lugita15 said:
Also, haven't entanglement experiments been done on all kinds of things, including qubits in the context of quantum computing, so don't you need a more general objection to Bell's theorem than just photons?
I don't have any objections to Bell's theorem.
Speaking about experiments, different experiments can have different loopholes or different sources of systematic errors.

But photon tests are way ahead of other entanglement experiments in terms of the attention they have received, the analysis done, and the different modifications of similar experiments performed. So I would like to stick to photon experiments.
 
  • #263
ThomasT said:
But how can it do that if the underlying parameter that determines coincidental detection is different than the underlying parameter that determines coincidental detection?

ThomasT, please can you clarify what you mean here? I'm guessing you just typed this wrong and that one "coincidental" should have read "individual".

If so, how could a coincidental parameter/function etc. possibly be different from an individual one?
 
  • #264
Joncon said:
ThomasT, please can you clarify what you mean here? I'm guessing you just typed this wrong and that one "coincidental" should have read "individual".
Thanks, I should have proofread what I wrote. I just corrected it.

Joncon said:
If so, how could a coincidental parameter/function etc. possibly be different from an individual one?
The underlying parameter or function that determines individual detection is assumed to be, for any given entangled pair, some value of some property. This value is assumed to vary, randomly, from pair to pair, because the rate of individual detection doesn't vary as a function of polarizer setting.

The underlying parameter or function that determines coincidental detection is assumed to be the relationship between those values. This relationship is assumed to not vary from pair to pair, because the rate of coincidental detection varies, predictably, as a function of the angular difference between the joint polarizer settings.
 
  • #265
ThomasT, I just don't understand your point. If you are a (nonsuperdeterministic) local determinist, and you find that entangled photons measured at polarizers oriented at the same angle behave identically, you can have only one possible response: "The photons are not coordinating their behavior through faster-than-light communication. Rather they are each deciding to go through or not go through the polarizer based on a common function P(θ), which equals 1 if the photon is supposed to go through and 0 if not." If you do not agree with this response, how can you consider yourself a local determinist?
 
  • #266
ThomasT said:
Thanks, I should have proofread what I wrote. I just corrected it.

The underlying parameter or function that determines individual detection is assumed to be, for any given entangled pair, some value of some property. This value is assumed to vary, randomly, from pair to pair, because the rate of individual detection doesn't vary as a function of polarizer setting.

The underlying parameter or function that determines coincidental detection is assumed to be the relationship between those values. This relationship is assumed to not vary from pair to pair, because the rate of coincidental detection varies, predictably, as a function of the angular difference between the joint polarizer settings.
But a "coincidental detection" is not some magical action. It is nothing more than performing "individual detections" on each of the two particles. So definitionally, what determines the result of a coincidental detection is just what determines the results of individual detections.
 
  • #267
lugita15 said:
ThomasT, I just don't understand your point. If you are a (nonsuperdeterministic) local determinist, and you find that entangled photons measured at polarizers oriented at the same angle behave identically, you can have only one possible response: "The photons are not coordinating their behavior through faster-than-light communication.
Right, but that's just one part of why I'm a (nonsuperdeterministic) local determinist who thinks the mainstream LR program was effectively ruled out by Bell almost 50 years ago.

lugita15 said:
Rather they are each deciding to go through or not go through the polarizer based on a common function P(θ), which equals 1 if the photon is supposed to go through and 0 if not."
Or rather, because entangled photons measured at polarizers oriented at the same angle behave identically, and also because rate of joint detection varies as θ varies, then it's assumed that the underlying parameter that's determining joint detection isn't varying from pair to pair. And because individual detection doesn't vary as the polarizer setting varies, then it's assumed that the underlying parameter that's determining individual detection is varying from pair to pair. Hence, the assumption that there is a different underlying parameter or function determining coincidental detection and individual detection. But the LR program requires that coincidental detection be modeled in terms of the same underlying parameter or function that's determining individual detection.
 
  • #268
lugita15 said:
But a "coincidental detection" is not some magical action. It is nothing more than performing "individual detections" on each of the two particles. So definitionally, what determines the result of a coincidental detection is just what determines the results of individual detections.
The rate of individual detection doesn't vary with the measurement parameter, but the rate of coincidental detection does vary with the measurement parameter. So, what would you infer from this?
 
  • #269
ThomasT said:
Or rather, because entangled photons measured at polarizers oriented at the same angle behave identically, and also because rate of joint detection varies as θ varies, then it's assumed that the underlying parameter that's determining joint detection isn't varying from pair to pair. And because individual detection doesn't vary as the polarizer setting varies, then it's assumed that the underlying parameter that's determining individual detection is varying from pair to pair. Hence, the assumption that there is a different underlying parameter or function determining coincidental detection and individual detection. But the LR program requires that coincidental detection be modeled in terms of the same underlying parameter or function that's determining individual detection.

The point is that in an LR theory, each photon is using one and only one function. Each photon is acting individually, totally unaware of what is happening with its entangled partner. So you either accept that the individual and coincidental functions are the same thing, or you make them different. If so, then the photons "choose" which function to use based on how they will be measured - and then you're back to superdeterminism.
 
  • #270
Joncon said:
The point is that in an LR theory, each photon is using one and only one function. Each photon is acting individually, totally unaware of what is happening with its entangled partner. So you either accept that the individual and coincidental functions are the same thing, or you make them different.
Ok, I believe they're different.

Joncon said:
If so, then the photons "choose" which function to use based on how they will be measured - and then you're back to superdeterminism.
If you could phrase this a bit less anthropically, that would be helpful. Photons aren't people.

We're talking about different measurement parameters. Is it unreasonable to suppose that these different measurement parameters are measuring different underlying parameters?
 
  • #271
ThomasT said:
The rate of individual detection doesn't vary with the measurement parameter, but the rate of coincidental detection does vary with the measurement parameter. So, what would you infer from this?
But there isn't some magical thing called "joint detection" or "coincidence detection". Rather, each experimenter just does individual detection of each photon, and records the results. We draw conclusions about the "rate of coincidental detection" AKA the correlation based on the results of individual detections. Since there is no such thing as coincidence detection, and correlation is nothing but correlation of individual detections, an analysis of entanglement cannot consist, even in principle, of anything other than asking what determines the results of individual detection.
 
  • #272
ThomasT said:
If you could phrase this a bit less anthropically, that would be helpful. Photons aren't people.

I agree, that's why I double-quoted "choose". Okay, to put it another way, what determines which function (individual/coincidental) is applied to the photons?

ThomasT said:
We're talking about different measurement parameters. Is it unreasonable to suppose that these different measurement parameters are measuring different underlying parameters?

But they're not different measurement parameters. Each photon has its polarization measured. That's it.
 
  • #273
lugita15 said:
But there isn't some magical thing called "joint detection" or "coincidence detection".
Who said anything about magic? The term rate of coincidental detection refers to a statistical accumulation, and that statistical accumulation varies as the measurement parameter varies. The term rate of individual detection also refers to a statistical accumulation, and that statistical accumulation doesn't vary as the measurement parameter varies. I asked what you might infer from this fact.

lugita15 said:
Since there is no such thing as coincidence detection, and correlation is nothing but correlation of individual detections ...
Of course there's such a thing as coincidence detection. What do you think Bell tests are about? The term Bell correlations refers to correlations between θ, the angular difference between the separated polarizers, and the rate of coincidental detection.

lugita15 said:
... an analysis of entanglement cannot consist, even in principle, of anything other than asking what determines the results of individual detection.
I would guess that that's what a lot of people think. And therein lies much of the confusion surrounding the meaning of Bell's theorem.

Anyway, of course an analysis of entanglement can consist of something other than asking what determines the results of individual detection. It starts with recognizing that the rates of individual and coincidental detection are determined by different parameters.
 
  • #274
Joncon said:
I agree, that's why I double-quoted "choose". Okay, to put it another way, what determines which function (individual/coincidental) is applied to the photons?
The measurement parameter.

Joncon said:
But they're not different measurement parameters.
Yes they are. The orientation of an individual polarizer is a different measurement parameter than the angular difference between two polarizer orientations.
 
  • #275
ThomasT said:
Who said anything about magic? The term rate of coincidental detection refers to something, and that something varies as the measurement parameter varies. There's also something called rate of individual detection, and that something doesn't vary as the measurement parameter varies. I asked what you might infer from this fact.
But whatever these rates are, they do not arise full-grown from the head of Zeus, do they? They are calculated solely from the results of individual detections of photons. Thus the only thing that can affect these rates is those results. So explaining the "rate of coincidence detection" consists of nothing more and nothing less than explaining the results of individual detection.
Of course there's such a thing as coincidence detection. What do you think Bell tests are about? The term Bell correlations refers to correlations between θ, the angular difference between the separated polarizers, and the rate of coincidental detection.
There is no experimental procedure called "coincidence detection", so the term "rate of coincidence detection" is highly misleading. Coincidences aren't "detected" experimentally, they are a consequence of individual detections.
Anyway, of course an analysis of entanglement can consist of something other than asking what determines the results of individual detection. It starts with recognizing that the rates of individual and coincidental detection are determined by different parameters.
They are both entirely determined by the same thing, the results of individual detections; that is, whether photon A from pair N goes through the polarizer oriented at the angle θ, to which the answer is either yes or no. I don't know how you can possibly disagree with this.
 
  • #276
ThomasT said:
Yes they are. The orientation of an individual polarizer is a different measurement parameter than the angular difference between two polarizer orientations.

But when photon A encounters polarizer A there's no such thing as "angular difference between two polarizer orientations". A (photon or polarizer) has no knowledge of what is happening at B.
 
  • #277
@ Joncon and lugita15,

I think this is a case of "not seeing the forest for the trees". There are two different measurement contexts to consider, and the results in each are determined by different parameters, both measurement parameters and assumed underlying ones.

I'm going to take a time out now. Please reread what I've written. Think about it some more. And I'll get back to you in a few hours.
 
  • #278
Joncon said:
But when photon A encounters polarizer A there's no such thing as "angular difference between two polarizer orientations".
Right, this is an individual measurement context. Do you think there's a difference between this measurement context and the one where coincidental detections are correlated with θ?
 
  • #279
lugita15 said:
They are both entirely determined by the same thing ... I don't know how you can possibly disagree with this.
Read my most recent posts again. I'll get back to you.
 
  • #280
lugita15 said:
There is no experimental procedure called "coincidence detection" ...
Sure there is. There's circuitry that matches detection attributes which operates according to calculations based on the photon emission source and the distance between the polarizers.

lugita15 said:
... so the term "rate of coincidence detection" is highly misleading. Coincidences aren't "detected" experimentally, they are a consequence of individual detections.
They're a consequence of matching individual detection attributes wrt calculated coincidence intervals.

Whether coincidental detections are counted 'on the fly' by circuitry built into the experimental design, or after the fact via time stamps, the fact is that the basic datum of entanglement setups (eg., Bell tests) is called coincidental detection, and the rate of coincidental detection varies as a function of θ, the angular difference between the polarizer settings.
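To make the time-stamp bookkeeping concrete, here is a minimal Python sketch of after-the-fact coincidence counting. The window value and sample data are made up, and real Bell-test analysis is more involved; the point is only that coincidence counts are assembled from the two stations' individual detection records:

def count_coincidences(events_a, events_b, window_ns=5.0):
    """events_a, events_b: time-ordered lists of (timestamp_ns, outcome)
    from each station, outcome being 1 or 0 for the two polarizer ports.
    Returns (matching_pairs, total_pairs) within the coincidence window."""
    matches, total, j = 0, 0, 0
    for t_a, out_a in events_a:
        while j < len(events_b) and events_b[j][0] < t_a - window_ns:
            j += 1                      # skip B events too early to pair with t_a
        if j < len(events_b) and abs(events_b[j][0] - t_a) <= window_ns:
            total += 1
            if out_a == events_b[j][1]:
                matches += 1
            j += 1                      # each B event pairs with at most one A event
    return matches, total

# Made-up example: three coincident pairs, plus one unpaired event at station B.
a = [(100.0, 1), (220.0, 0), (340.0, 1)]
b = [(101.2, 1), (219.5, 0), (280.0, 1), (341.1, 0)]
m, n = count_coincidences(a, b)
print(f"{m}/{n} coincident pairs matched")   # 2/3

The coincidence statistics discussed above (matched versus unmatched pairs as a function of the relative polarizer angle) are then computed from exactly these individual records.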

So, given that the rate of individual detection doesn't vary as a function of polarizer setting, then what can you infer from this?

lugita15 said:
They are both entirely determined by the same thing ...
No. Incorrect inference. This doesn't follow from the known experimental results.
 

1. Why is superdeterminism not the universally accepted explanation of nonlocality?

Superdeterminism is not the universally accepted explanation of nonlocality because it goes against the widely accepted principle of free will. Superdeterminism suggests that all events, including human decisions, are predetermined and therefore there is no true randomness or free will in the universe. This goes against our understanding of human agency and the ability to make choices.

2. What evidence supports the rejection of superdeterminism as an explanation for nonlocality?

Superdeterminism can accommodate the observed violations of Bell's inequality only by assuming that the experimenters' choices of measurement settings are correlated with the hidden variables of the particles being measured, i.e. by assuming conspiratorial initial conditions for the universe. Most physicists reject this because it is extraordinarily contrived and because no concrete superdeterministic theory has so far been developed to the point of competing with the alternatives.

3. Are there alternative explanations for nonlocality other than superdeterminism?

Yes, there are alternative explanations for nonlocality that do not rely on superdeterminism. Nonlocal hidden-variable theories such as Bohmian mechanics reproduce the observed correlations without conspiratorial initial conditions, and other interpretations simply accept the correlations as a basic quantum feature rather than something to be explained by predetermined measurement settings.

4. What implications would accepting superdeterminism have on our understanding of the universe?

If superdeterminism were to be accepted as the explanation for nonlocality, it would have significant implications for our understanding of the universe. It would mean that all events, including our thoughts and actions, are predetermined and that there is no true randomness or free will. This would challenge our understanding of causality and the role of human agency in shaping our reality.

5. Is there ongoing research and debate surrounding the concept of superdeterminism and its relation to nonlocality?

Yes, there is ongoing research and debate surrounding the concept of superdeterminism and its relation to nonlocality. Scientists continue to explore alternative explanations for nonlocality and gather evidence to support or refute the concept of superdeterminism. This is an active area of study in the field of quantum mechanics and there is no consensus yet on the ultimate explanation for nonlocality.
