Peter Morgan said:
Part of the conspiracy, at least, comes from the experimenter. An apparatus belonging to a specific symmetry class has to be constructed, typically over months, since it has historically not been easy to violate Bell inequalities. The material physics that allows us to construct the requisite correlations between measurement results is arguably pretty weird.
Furthermore, the standard way of modeling Bell-inequality-violating experiments in QM is to introduce projection operators onto polarization states of a single frequency mode of light, which are non-local operators. [Apropos of which, DrC, do you know of a derivation that is truly careful about the field-theoretic locality?] The QM model, in other words, is essentially a description of steady-state, time-independent statistics with specific symmetry properties. Since I take violation of Bell inequalities to be more about contextuality than about nonlocality (contextuality here being implemented specifically by post-selection of sub-ensembles according to which measurement settings were in fact chosen), this seems natural to me, but I wonder what you think?
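For concreteness, here is a minimal sketch of that standard QM model, assuming the polarization-entangled state |Phi+> = (|HH> + |VV>)/sqrt(2) and projection operators onto linear-polarization states at angles a and b. It only reproduces the textbook steady-state coincidence statistics; it says nothing about how careful one is being about field-theoretic locality.

```python
import numpy as np

def polarization_ket(theta):
    """Linear-polarization state at angle theta in the H/V basis."""
    return np.array([np.cos(theta), np.sin(theta)])

def projector(theta):
    """Projection operator onto the polarization state at angle theta."""
    k = polarization_ket(theta)
    return np.outer(k, k)

# Polarization-entangled state |Phi+> = (|HH> + |VV>)/sqrt(2)
phi_plus = (np.kron([1, 0], [1, 0]) + np.kron([0, 1], [0, 1])) / np.sqrt(2)

def coincidence_probability(a, b):
    """P(both photons pass polarizers at angles a and b) for |Phi+>."""
    P = np.kron(projector(a), projector(b))   # tensor-product (non-local) projector
    return float(phi_plus @ P @ phi_plus)

def correlation(a, b):
    """E(a, b) assembled from the four coincidence probabilities."""
    p = coincidence_probability
    return (p(a, b) + p(a + np.pi/2, b + np.pi/2)
            - p(a, b + np.pi/2) - p(a + np.pi/2, b))

a, b = 0.0, np.pi / 8
print(coincidence_probability(a, b))   # 0.5 * cos^2(a - b), about 0.427
print(correlation(a, b))               # cos(2 * (a - b)),   about 0.707
```

The correlation depends only on the relative angle a - b, which is the kind of symmetry property referred to above.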
Remember that with me you have to make a different argument than you might make with someone who thinks the measurement results are noncontextually determined by the state of each of two particles, since for me whether measurement events occur is determined jointly by the measurement devices and the field they are embedded in.
I do wonder, but apparently that's how the statistics pile up. We have a choice: either we just say, with Copenhagen, that we can say nothing at all about anything that is not macroscopic, or we consider what properties different types of models have to have in order to "explain" the results. A particle physicist tells a causal story about what happens in experiments, using particles, anti-particles, and ghost and virtual particles, with various prevarications about what is really meant when one talks about such things (a story that is almost inevitably nonlocal if anything like Wigner's definition of a particle is invoked); so it seems reasonable to consider what prevarications there have to be in other kinds of models. It's good that we know moderately well what prevarications we have to introduce in the case of deBB, and that in that case they involve a nonlocal trajectory dynamics.
This might be true, I guess, although proving that superdeterminism is a hedge for all possible physical laws looks like tough mathematics to me. Is the same perhaps true for backward causation? Do you think it's an acceptable response to ask what constraints have to be put on superdeterminism (or backward causation) to make it give less away?
You're always welcome with me, DrC. I'm very pleased with your comments in this case. If you're ever in CT, look me up.
I like the Omphalos. Is it related to the heffalump?
Slightly after the above, I'm particularly struck by your emphasis on the degree of correlation required in the initial conditions to obtain the experimental results we see. Isn't the degree of correlation required in the past precisely the same as the degree of correlation that we note in the records of the experimental data? It's true that the correlations cannot be observed in the past without measurement of the initial state in outrageous detail across the whole of a time-slice of the past light-cone of a measurement event, insofar as there is any degree of dynamical chaos, but that doesn't take away from the fact that in a fine-grained enough description there is no change of entropy. [That last phrase is a bit cryptic, perhaps, but it takes my fancy a little. Measurements now are the same constraint on the state in the past as they are on the state now. Since they are actually observed constraints now, it presumably cannot be denied that they are constraints on the state now. If the actual experimental results look a little weird as constraints that one might invent now, then presumably they look exactly as weird as constraints on the state 10 years ago, no more and no less. As observed constraints, they are constraints on what models have to be like to be empirically adequate.] I'm worried that all this repetition is going to look somewhat blowhard, as it does a little to me now, so I'd be glad if you could tell me whether you can see any content in it.
We have a lot of jackalopes in Texas, but few heffalumps.
---------------------------------
The issue is this: Bell sets limits on local realistic theories, so there are several potential "escape" mechanisms. One is non-locality, of which the Bohmian approach is an example; it attempts to describe explicitly the mechanism by which Bell violations can occur. Detailed analysis appears to provide answers as to how this could match observation. BM can be explicitly critiqued, and answers can be provided to those critiques.
Another is the "superdeterminism" approach. Under this concept, the initial conditions are just such that all experiments which are actually performed will always show Bell violations. However, as with the "fair sampling" loophole, the idea is that over the full universe of possible observations - including the counterfactual ones - the true rate of coincidence does NOT violate a Bell inequality. So there is a bias function at work. That bias function distorts the true results because the experimenter's free will is compromised: the experimenter can only select measurements whose results support QM, due to the experimenter's (naive and ignorant) bias.
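To make that "bias function" talk concrete, here is a toy illustration (entirely hypothetical; it is not a model anyone has actually proposed): a deterministic local hidden-variable source whose full ensemble gives the usual local-realistic correlation, together with a setting-dependent rejection rule standing in for the bias function. The point is only that the post-selected subsample can show a correlation different from the "true" full-ensemble one; it does not reproduce the QM predictions.

```python
import numpy as np

rng = np.random.default_rng(0)

def lhv_outcome(setting, lam):
    """Deterministic +/-1 outcome of a toy local hidden-variable model."""
    return np.sign(np.cos(2 * (setting - lam)))

def toy_run(a, b, threshold, n=200_000):
    """Compare the full-ensemble correlation with the post-selected one.

    A pair counts as 'detected' only when the shared hidden variable is
    far enough from either analyzer's sign-flip boundary; this
    setting-dependent rejection rule stands in for the bias function."""
    lam = rng.uniform(0, np.pi, n)            # hidden variable shared by the pair
    A = lhv_outcome(a, lam)
    B = lhv_outcome(b, lam)                   # perfectly correlated at a == b
    detected = ((np.abs(np.cos(2 * (a - lam))) > threshold)
                & (np.abs(np.cos(2 * (b - lam))) > threshold))
    return (A * B).mean(), (A[detected] * B[detected]).mean()

full, subsample = toy_run(a=0.0, b=np.pi / 8, threshold=0.2)
print(full)        # about 0.5: the local-realistic value at a 22.5 degree difference
print(subsample)   # noticeably larger; QM predicts cos(pi/4), about 0.71
```

Only the recorded subsample is shifted away from the full-ensemble rate; the full ensemble remains a garden-variety local model.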
Now, without regard to the reasonableness of that argument, I point out the following cases, in which the results are identical.
a) The experimenter's detector settings are held constant for a week at a time.
b) The settings are changed at the discretion of the experimenter, at any interval.
c) The settings are changed due to clicks from a radioactive sample, via an automated system over which the experimenter has no direct control.
d) A new hypothesis: the experiments actually do NOT violate a Bell inequality, but the data-recording device has coincidentally been modified to show results indicating that the Bell inequality was violated.
In other words, we know we won't see any difference between a), b) and c). And if d) occurred, it would be a different form of "superdeterminism". So the question I am asking is: does superdeterminism need to obey any rules? Does it need to be consistent? Does it need to be falsifiable? Because clearly, case a) above should be enough to rule out superdeterminism (at least in my mind - the experimenter is exercising no ongoing choice past an initial point). Case c) requires that the superdeterministic correlations flow from one force to another, when the Standard Model shows no such mechanism (there is no known connection between an experimental optical setting and the timing of radioactive decay). And case d) shows that there is always one more avenue by which we can float an ad hoc hypothesis.
So you ask: is superdeterminism a hedge for all physical laws? If you allow the above, one might turn around and ask: does it not apply to other physical laws equally? Because my answer is that, if so, perhaps relativity is not a true effect; it is simply a manifestation of superdeterminism. All of those GPS satellites... they are subject to the same idea, that the experimenter is not free to request GPS information at will. So while the results appear to follow GR, they really do not. How is this any less scientific than the superdeterminism "loophole" as applied to Bell?
In other words, there is no rigorous form of superdeterminism to critique at this point beyond an ad hoc hypothesis. And we can formulate ad hoc hypotheses about any physical law, none of which will ever have any predictive utility. So I say it is not science in the conventional sense.
-----------------------
You mention contextuality and the subsamples (events actually recorded). And you also mention the "degree of correlation required in the initial conditions to obtain the experimental results we see". The issue I return to time after time: the bias function - the delta between the "true" universal correlation rates and the observed subsample correlation rates - must itself be a function of the context. But it is sometimes negative and sometimes positive.
That seems unreasonable to me, considering, of course, that the context depends ONLY on the relative angle difference and on nothing else.
So we need a bias function that depends on nothing except the difference between the measurement settings at a specific point in time. It must apply to entangled light, which also shows perfect correlations. But it must NOT apply to unentangled light (as you know, that is my criticism of the De Raedt model). And it must further return apparently random individual outcomes in all cases. I believe these are all valid requirements on a superdeterministic model - as well as locality and realism, of course.
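To put a number on that sign change, here is a minimal sketch, assuming the standard QM correlation cos(2*theta) for polarization-entangled photons and, purely as a stand-in for the hypothetical "true" unbiased rates, the sawtooth correlation of the simplest deterministic local model. The delta between the two depends only on the relative angle, and it changes sign.

```python
import numpy as np

def qm_correlation(theta):
    """QM prediction for polarization-entangled photons, relative angle theta."""
    return np.cos(2 * theta)

def linear_lhv_correlation(theta):
    """Sawtooth correlation of the simplest deterministic local model,
    used here as a stand-in for the hypothetical 'true' (unbiased) rates."""
    return 1 - 4 * theta / np.pi          # valid for 0 <= theta <= pi/2

for deg in (0, 10, 22.5, 45, 67.5, 80, 90):
    theta = np.radians(deg)
    delta = qm_correlation(theta) - linear_lhv_correlation(theta)
    print(f"{deg:5.1f} deg  bias needed: {delta:+.3f}")

# The required bias is zero at 0, 45 and 90 degrees, positive on one side
# of 45 degrees and negative on the other: it depends only on the relative
# angle, and it changes sign.
```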