Thanks for the feedback. I have more questions.
Demystifier said:
In deterministic theory the initial conditions are arbitrary. In superdeterministic theory the initial conditions are "fine tuned" or "conspired" such that some additional regularity emerges.
OK, so a nonlocal deterministic theory can correctly predict the coincidental photon flux for any joint setting of the polarizers, while keeping lambda (the hidden variable which determines the individual photon flux) random, because it allows paired (entangled) photons to communicate instantaneously or faster than light (FTL).
On the other hand, a local deterministic theory cannot correctly predict the coincidental photon flux for some joint settings of the polarizers, while keeping lambda random, because it forbids paired (entangled) photons from communicating instantaneously or FTL.
But a local superdeterministic theory can correctly predict the coincidental photon flux for any joint setting of the polarizers, because ... why? Because lambda is not described as varying randomly? If that's the case, then how is lambda described? Or is it something else?
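To make the local-vs-nonlocal contrast concrete, here's a quick numerical sketch (my own toy model, not anything from the paper): lambda is a random shared polarization angle, and each photon deterministically passes its polarizer iff the hidden polarization lies within 45 degrees of the polarizer axis. The CHSH combination of correlations for this local deterministic model sits at Bell's local bound of 2, while the QM prediction at the same angles is 2*sqrt(2).

```python
# Toy sketch: a local deterministic hidden-variable model vs. the QM
# prediction, compared via the CHSH combination of correlations.
# The model and the deterministic rule below are my own illustrative
# assumptions, not taken from the thread or the cited paper.
import math
import random

random.seed(0)

def lhv_outcome(setting, lam):
    # Local deterministic rule: +1 if the hidden polarization angle lam
    # is within 45 degrees of the polarizer axis, else -1.
    return 1 if math.cos(2 * (setting - lam)) >= 0 else -1

def correlation_lhv(a, b, n=100_000):
    # Monte Carlo estimate of E(a, b) with lambda kept random (uniform).
    total = 0
    for _ in range(n):
        lam = random.uniform(0, math.pi)
        total += lhv_outcome(a, lam) * lhv_outcome(b, lam)
    return total / n

def correlation_qm(a, b):
    # Standard QM correlation for polarization-entangled photon pairs.
    return math.cos(2 * (a - b))

# Polarizer settings that maximize the QM violation of the CHSH bound.
a1, a2 = 0.0, math.pi / 4
b1, b2 = math.pi / 8, 3 * math.pi / 8

def chsh(E):
    return abs(E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2))

print("local deterministic CHSH:", chsh(correlation_lhv))  # ~2 (Bell's local bound)
print("QM CHSH:", chsh(correlation_qm))                    # 2*sqrt(2), about 2.83
```

Whatever deterministic local rule is plugged in for `lhv_outcome`, the CHSH value cannot exceed 2 as long as lambda is random and independent of the settings; that independence is exactly what superdeterminism gives up.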
I'm inclined to go with DrC's assessment that "Superdeterminism itself is a horrible crock of anti-science." But the fact is that I really don't know what it means (other than determinism with a superfluous prefix). So, I'm hoping that you or somebody will elaborate a bit.
Demystifier said:
In the context of quantum mechanics, this regularity is correlations that cannot be explained by random initial conditions and local interactions between hidden variables.
My understanding is that standard QM correctly predicts the coincidental photon flux for any joint setting of the polarizers, while keeping lambda random, because it models the coincidental photon flux in terms of the relationship between the paired (entangled) photons. This relationship is an underlying, global parameter which doesn't require interaction between the paired (entangled) photons, and which is analyzed by the global measurement parameter of the joint polarizer settings. The results are in line with the optics understanding of how photons locally interact with polarizers.

This can be illustrated by considering a simple optical Bell setup where, say, polarizer A is moved to the same side as polarizer B -- then the coincidental photon flux follows the same cos^2(Theta) angular dependence as when the polarizers are on opposite sides. With polarizers A and B on the same side, there's no need to posit nonlocal interactions to understand the observed angular dependence. So why should it be necessary when the polarizers are on opposite sides?

But that is just for understanding. In order to make an explicitly LRHV model of the coincidental photon flux with the polarizers on opposite sides, the impossibility of instantaneous or FTL communication between the paired (entangled) photons has to be explicitly encoded into the model. And there doesn't seem to be any way of clearly doing that without skewing the predictions of such a model, even though the angular dependence remains essentially the same.
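The same-side vs. opposite-side comparison can be spelled out numerically. A small sketch (my own, under the usual textbook formulas): for entangled pairs with polarizers at relative angle theta on opposite sides, QM gives a coincidence probability of (1/2)cos^2(theta); for one unpolarized beam through two polarizers in series on the same side, the first polarizer transmits half and Malus's law gives cos^2(theta) at the second, so the joint transmission is the same (1/2)cos^2(theta).

```python
# Sketch: the angular dependence is identical in the two arrangements.
# Both formulas are standard textbook results; the comparison itself is
# just my illustration of the point made above.
import math

def qm_coincidence(theta):
    # Opposite sides: QM probability that both entangled photons pass,
    # with polarizers at relative angle theta.
    return 0.5 * math.cos(theta) ** 2

def malus_series(theta):
    # Same side: unpolarized light through polarizer 1 (factor 1/2),
    # then Malus's law cos^2(theta) at polarizer 2.
    return 0.5 * math.cos(theta) ** 2

for deg in (0, 22.5, 45, 67.5, 90):
    t = math.radians(deg)
    print(f"{deg:5.1f} deg  opposite: {qm_coincidence(t):.4f}  same side: {malus_series(t):.4f}")
```

Of course, the two cases differ in what has to carry the correlation: in the same-side case one photon meets both polarizers locally, while in the opposite-side case an LRHV model must reproduce the same curve without any contact between the wings -- which is exactly where Bell's theorem bites.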
Anyway, I still hold the tentative opinion that the cited paper doesn't improve our understanding of quantum entanglement or Bell's theorem -- that it basically just adds to the confusion surrounding the interpretations of these things.
You haven't yet said what you think of the paper. So, what do you think of it?