## Arguments Against Superdeterminism

 Quote by ThomasT If it were that unreliable, it wouldn't be common.
Either unreliable or (not?) so often used...

 Quote by ThomasT The fact that we continue to behave inductively can, at the level of our behavior, be understood as being due to the fact that it usually works.
worked, in the past.

Induction reasons from observed to unobserved.
You are reasoning from observed instances, in the past, where induction worked, to as yet unobserved instances in the future.
You're using induction to justify your belief in induction.

 We can treat induction as an assumption, or as a method of reasoning, but it's more generally a basic behavioral characteristic. We behave inductively. It's part of our common behavioral heritage, our common sense.
Sure, we behave irrationally all the time.
Using induction involves an unjustified assumption.
That doesn't mean, once we make the assumption we can't proceed rationally.
As Hume said, induction is mere habit.

We can, of course, use it rationally, but we can't justify its usage.

 Quote by ThomasT Using the standard referents for emitter, polarizer, and detector, in a simple optical Bell test setup involving emitter, 2 polarizers, and 2 detectors it's pretty easy to demonstrate that the polarizer settings aren't determined by the detector settings, or by the emitter, or by anything else in the design protocol except "whatever is used to change" the polarizer settings.
That may be true if you only look at the macroscopic description of the detector/emitter/etc. We do not know whether the motion of particles in these objects exerts an influence on the motion of particles in other objects, because we only have a statistical description. The individual trajectories might be correlated even if the average force between macroscopic objects remains null.

 It's been demonstrated that the method that's used to change the polarizer settings, and whether it's a randomized process or not, isn't important wrt joint detection rate. What is important is the settings that are associated with the detection attributes via the pairing process -- not how the settings themselves were generated.
I agree but this is irrelevant. Nothing has been demonstrated regarding the individual results obtained.

 It's already well established that detector orientations don't trigger emissions
How?

 -- and changing the settings while the emissions are in flight has no observable effect on the correlations.
I wouldn't expect that anyway.

 If you want to say that these in-flight changes are having some (hidden) effect, then either there are some sort of nonlocal hidden variable(s) involved, or, as you suggest, there's some sort of heretofore unknown, and undetectable, local field that's determining the correlations. Your suggestion seems as contrived as the nonlocal models -- as well as somewhat incoherent wrt what's already known (ie., wrt working models).
The field is the "hidden variable", together with the particles' positions, so in this sense it is not known. However, if this field determines the particles' motions, it has to appear, at the statistical level, as the EM, weak, and color fields. The question is whether such a field can be formulated. Unfortunately I cannot do it myself, as I don't have the required skills, but I wonder whether it could be achieved or whether it is mathematically impossible. Now, in the absence of a mathematical formulation, it is premature to say that it must be contrived, or that the hypothesis is not falsifiable.

 I still don't know what the distinguishing characteristics of a superdeterministic theory are. Can you give a general definition of superdeterminism that differentiates it from determinism? If not, then your OP is just asking for (conclusive or definitive) arguments against the assumption of determinism. There aren't any. So, the assumption that the deep reality of Nature is deterministic remains the de facto standard assumption underlying all physical science. It isn't mentioned simply because it doesn't have to be. It's generally taken for granted -- not dismissed.
What I am interested is local determinism, as non-local determinism is clearly possible (Bohm's interpretation). There are many voices that say that it has been ruled out by EPR experiments. I am interested in seeing if any strong arguments can be put forward against the use of SD loophole (denying the statistical independence between emitter and detector) to reestablish local deterministic theories as a reasonable research object.

 Quote by ueit What I am interested is local determinism, as non-local determinism is clearly possible (Bohm's interpretation). There are many voices that say that it has been ruled out by EPR experiments. I am interested in seeing if any strong arguments can be put forward against the use of SD loophole (denying the statistical independence between emitter and detector) to reestablish local deterministic theories as a reasonable research object.
ueit,

Statistics are calculated mathematically from individual measurements. They are aggregate observations about likelihoods. Determinism deals with absolute rational causation. It would be an inductive fallacy to say that statistics can tell us anything about the basic mechanisms of natural causation.

Linguistically, we may use statistics as 2nd order explanations in statements of cause and effect, but it is understood that statistical explanations never represent true causation. If determinism exists, there must necessarily be some independent, sufficient, underlying cause - some mechanism of causation.

Because of the problem of induction, no irreducible superdeterministic explanation can prove anything about the first order causes and effects that basic physics is concerned with.

The very concept of locality necessarily implies classical, billiard-ball style, momentum transfer causation. The experiments of quantum mechanics have conclusively falsified this model.

 Quote by ThomasT Not disproven, but supplanted by qm wrt certain applications. Classical physics is used in a wide variety of applications. Sometimes, in semi-classical accounts, part of a system is treated classically and the other part quantum mechanically.
ThomasT, I can treat my sister as if she's my aunt. That doesn't make it true :). Local causation stemming from real classical particles and waves has been falsified by experiments. EPRB type experiments are particularly illustrative of this fact.

 Yes, it can't be proven. Just reinforced by observations. Our windows on the underlying reality of our universe are small and very foggy. However, what is known suggests that the deep reality is deterministic and locally causal.
If there is evidence of deep reality being deterministic, I would like to know what it is :). As for the universe being locally deterministic, this has been proven impossible. See above.

 Induction is justified by its continued practical utility. A general understanding for why induction works at all begins with assumption of determinism.
So we're supporting determinism by assuming determinism?

 As you indicated earlier, we can't actually look at the quantum level, but the assumption of determinism is kept because there is evidence that there are quantum-scale mechanisms for physical causation.
If the evidence is inductive, then since you claim induction relies on an assumption of determinism itself, there is no evidence at all. I'm not denying the idea that there could be evidence for basic determinism, but the only evidence I've seen proposed here so far has been ethical. It has been assumptions about what we should believe and what's practical, rather than what we can know or what's true.

 Quote by ThomasT A higher order explanation isn't necessarily superfluous, though it might, in a certain sense, be considered as such if there exists a viable lower order explanation for the same phenomenon.
ThomasT,

If there is no viable lower order explanation then by definition you aren't dealing with a higher order explanation. Higher order explanations, as such, are not necessary, and unless they are reducible to first order explanations, they cannot be sufficient either.

Basically, they aren't true causes (or explanations) at all.

 Quote by ueit What I am interested is local determinism, as non-local determinism is clearly possible (Bohm's interpretation). There are many voices that say that it has been ruled out by EPR experiments.
Lhv formalisms of quantum entangled states are ruled out -- not the possible existence of lhv's. As things stand now, there's no conclusive argument for either locality or nonlocality in Nature. But the available physical evidence suggests that Nature behaves deterministically according to the principle of local causation.

 Quote by ueit I am interested in seeing if any strong arguments can be put forward against the use of SD loophole (denying the statistical independence between emitter and detector) to reestablish local deterministic theories as a reasonable research object.
You've already agreed that the method of changing the polarizer settings, as well as whether or not they're changed while the emissions are in flight, incident on the polarizers, is irrelevant to the rate of joint detection.

The reason that Bell inequalities are violated has to do with the formal requirements due to the assumption of locality. This formal requirement also entails statistical independence of the accumulated data sets at A and B. But entanglement experiments are designed and executed to produce statistical dependence via the pairing process.

There's no way around this unless you devise a model that can actually predict individual detections.

Or you could reason your way around the difficulty by noticing that the hidden variable (ie., the specific quality of the emission that might cause enough of it to be transmitted by the polarizer to register a detection) is irrelevant wrt the rate of joint detection (the only thing that matters wrt joint detection is the relationship, presumably produced via simultaneous emission, between the two opposite-moving disturbances). Thus preserving the idea that the correlations are due to local interactions/transmissions, while at the same time modelling the joint state in a nonseparable form. Of course, then you wouldn't have an explicitly local, explicitly hidden variable model, but rather something along the lines of standard qm.
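
For readers who want the arithmetic behind the Bell-inequality talk in this thread, here is a minimal numerical sketch. The cos 2(a−b) correlation and the angle choices are the standard textbook values for polarization-entangled photons, not anything specific to the models discussed here:

```python
import math

def E(a, b):
    """Textbook QM correlation for polarization-entangled photons
    measured at analyzer angles a and b (in radians)."""
    return math.cos(2 * (a - b))

# Standard CHSH angle choices for photons: 0, 45, 22.5, 67.5 degrees.
a1, a2 = 0.0, math.pi / 4
b1, b2 = math.pi / 8, 3 * math.pi / 8

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(S)  # 2*sqrt(2) ~ 2.828, above the local-hidden-variable bound of 2
```

Any local hidden variable model constrained by the statistical-independence assumption keeps |S| at or below 2; the quantum correlation exceeds it.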

 Quote by ThomasT Yes, of course, the presumed existence of hidden variables comes with the assumption of determinism. I don't think anybody would deny that hidden variables exist.
Well, if you assume D then you must be including local hidden variables. Therefore you're rejecting both Bell's Theorem and the Heisenberg uncertainty principle. Moreover, I guess quantum fluctuations at the Planck scale could not be random either.

My reductio ad absurdum argument was based on thermodynamics, which at the theoretical level is based on probabilities. If a system can only exist in one possible state and only transit into one other possible state, there is no Markov process. All states (past, present, and future) exist with p=1 or p=0. Under D, probabilities can only reflect our uncertainty. If you plug 0 or 1 into the Gibbs equation, you get positive infinity or 0. Any values in between (under D) are merely reflections of our uncertainty. Yet we can actually measure finite nonzero values of entropy in experiments (defined as Q/T, or heat/temperature). Such results cannot be only reflections of our uncertainty. Remember, there is no statistical independence under D.
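
The Gibbs-equation point can be checked numerically. This is a minimal sketch (the four-state toy distributions are my own illustration, with the usual convention 0·ln 0 = 0): a distribution in which every probability is 0 or 1 has exactly zero Gibbs entropy, so under D any finite nonzero measured value would have to reflect our uncertainty alone.

```python
import math

def gibbs_entropy(probs, k=1.0):
    """Gibbs/Shannon entropy S = -k * sum(p * ln p), using 0 * ln 0 = 0."""
    return -k * sum(p * math.log(p) for p in probs if p > 0)

# Under strict determinism every state has p = 1 or p = 0 ...
deterministic = [1.0, 0.0, 0.0, 0.0]
# ... while a maximally uncertain description spreads p over the states.
uniform = [0.25, 0.25, 0.25, 0.25]

print(gibbs_entropy(deterministic))  # 0.0: certainty carries no entropy
print(gibbs_entropy(uniform))        # ln 4 ~ 1.386: uncertainty does
```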

None of this either proves or disproves D. I don't think it can be done. It seems to be essentially a metaphysical issue. However, it seems to me (I'm not a physicist) like you have to give up a lot to assume D at all scales.

 Quote by SW VandeCarr Well, if you assume D then you must be including local hidden variables. Therefore you're rejecting both Bell's Theorem and the Heisenberg uncertainty principle. Moreover, I guess quantum fluctuations at the Planck scale could not be random either.
Yes, I'm including local hidden variables. Bell's analysis has to do with the formal requirements of lhv models of entangled states, not with what might or might not exist in an underlying quantum reality. The HUP has to do with the relationship between measurements on canonically conjugate variables. The product of the statistical spreads is equal to or greater than ħ/2. Quantum fluctuations come from an application of the HUP. None of this tells us whether or not there is an underlying quantum reality. I would suppose that most everybody believes there is. It also doesn't tell us whether Nature is local or nonlocal. So, the standard assumption is that it's local.

 Quote by SW VandeCarr None of this either proves or disproves D. I don't think it can be done.
I agree.

 Quote by SW VandeCarr It seems to be essentially a metaphysical issue.
I suppose so, but not entirely insofar as metaphysical constructions can be evaluated wrt our observations of Nature. And I don't think that one has to give up anything that's accepted as standard mainstream physical science to believe in a locally deterministic underlying reality.

 Quote by kote Local causation stemming from real classical particles and waves has been falsified by experiments. EPRB type experiments are particularly illustrative of this fact.
These are formal issues. Not matters of fact about what is or isn't true about an underlying reality.

 Quote by kote If there is evidence of deep reality being deterministic, I would like to know what it is :).
It's all around you. Order and predictability are the rule in physical science, not the exception. The deterministic nature of things is apparent on many levels, even wrt quantum experimental phenomena. Some things are impossible to predict, but, in general, things are not observed to happen independently of antecedent events. The most recent past (the present) is only slightly different from 1 second before. Take a movie of any physical process that you can visually track and look at it frame by frame.

There isn't any compelling reason to believe that there aren't any fundamental deterministic dynamics governing the evolution of our universe, or that the dynamics of waves in media is essentially different wrt any scale of behavior. In fact, quantum theory incorporates lots of classical concepts and analogs.

 Quote by kote As for the universe being locally deterministic, this has been proven impossible.
This is just wrong. Where did you get this from?

Anyway, maybe you should start a new thread here in the philosophy forum on induction and/or determinism. I wouldn't mind discussing it further, but I don't think we're helping ueit wrt the thread topic.

 Quote by ThomasT I suppose so, but not entirely insofar as metaphysical constructions can be evaluated wrt our observations of Nature. And I don't think that one has to give up anything that's accepted as standard mainstream physical science to believe in a locally deterministic underlying reality.
I think we may have to give up more if we want D. You didn't address my thermodynamic argument. Entropy is indeed a measure of our uncertainty regarding the state of a system. We already agreed that our uncertainty has nothing to do with nature. Yet how is it that we can measure entropy as the relation Q/T? The following shows how we can derive the direct measure of entropy from first principles (courtesy of Count Iblis):

http://en.wikipedia.org/wiki/Fundame...rst_principles

The assumption behind this derivation is that the position and momentum of each of N particles (atoms or molecules) in a gas are uncorrelated (statistically independent). However D doesn't allow for statistical independence. Under D the position and momentum of each particle at any point in time is predetermined. Therefore there is full correlation of the momenta of all the particles.

What is actually happening when the experimenter heats the gas and observes a change in the Q/T relation (entropy increases)? Under D the whole experiment is a predetermined scenario with the actions of the experimenter included. The experimenter didn't decide to heat the gas or even set up the experiment. The experimenter had no choice. She or he is an actor following the deterministic script. Everything is correlated with everything else with measure one. There really is no cause and effect. There is only the predetermined succession of states. Therefore you're going to have to give up the usual (strong) form of causality where we can perform experimental interventions to test causality (if you want D).

Causality is not defined in mathematics or logic. It's usually defined operationally: given that A is the necessary, sufficient, and sole cause of B, if you remove A, then B cannot occur. Well, under D we cannot remove A unless it was predetermined that p(B)=0. At best, we can have a weak causality where we observe a succession of states that are inevitable.

 Quote by kote ueit, Statistics are calculated mathematically from individual measurements. They are aggregate observations about likelihoods. Determinism deals with absolute rational causation. It would be an inductive fallacy to say that statistics can tell us anything about the basic mechanisms of natural causation.
There is no fallacy here. One may ask what deterministic models could fit the statistical data. If you are lucky you may falsify some of them and find the "true" one. There is no guarantee of success but there is no fallacy either.

 Linguistically, we may use statistics as 2nd order explanations in statements of cause and effect, but it is understood that statistical explanations never represent true causation. If determinism exists, there must necessarily be some independent, sufficient, underlying cause - some mechanism of causation.
I don't understand the meaning of "independent" cause. Independent from what? Most probably, the "cause" is just the state of the universe in the past.

 Because of the problem of induction, no irreducible superdeterministic explanation can prove anything about the first order causes and effects that basic physics is concerned with.
No absolute proof is possible in science and I do not see any problem with that. Finding a SD mechanism behind QM could lead to new physics and I find this interesting.

 The very concept of locality necessarily implies classical, billiard-ball style, momentum transfer causation. The experiments of quantum mechanics have conclusively falsified this model.
This is false. General relativity and classical electrodynamics are local theories, yet they are not based on the billiard-ball concept but on fields.

 Quote by ThomasT Lhv formalisms of quantum entangled states are ruled out -- not the possible existence of lhv's. As things stand now, there's no conclusive argument for either locality or nonlocality in Nature. But the available physical evidence suggests that Nature behaves deterministically according to the principle of local causation. You've already agreed that the method of changing the polarizer settings, as well as whether or not they're changed while the emissions are in flight, incident on the polarizers, is irrelevant to the rate of joint detection. The reason that Bell inequalities are violated has to do with the formal requirements due to the assumption of locality. This formal requirement also entails statistical independence of the accumulated data sets at A and B. But entanglement experiments are designed and executed to produce statistical dependence via the pairing process. There's no way around this unless you devise a model that can actually predict individual detections. Or you could reason your way around the difficulty by noticing that the hidden variable (ie., the specific quality of the emission that might cause enough of it to be transmitted by the polarizer to register a detection) is irrelevant wrt the rate of joint detection (the only thing that matters wrt joint detection is the relationship, presumably produced via simultaneous emission, between the two opposite-moving disturbances). Thus preserving the idea that the correlations are due to local interactions/transmissions, while at the same time modelling the joint state in a nonseparable form. Of course, then you wouldn't have an explicitly local, explicitly hidden variable model, but rather something along the lines of standard qm.
I think I should better explain what I think it does happen in an EPR experiment.

1. At the source location, the field is a function of the detectors' state. Because the model is local, this information is "old". If the detectors are 1 ly away, then the source "knows" the detectors' state as it was 1 year in the past.

2. From this available information and the deterministic evolution law the source "computes" the future state of the detectors when the particles arrive there.

3. The actual spin of the particles is set at the moment of emission and does not change in flight.

4. The correlations are a direct result of the way the source "chooses" the spins of the entangled particles. It so happens that this "choice" follows Malus's law.

In conclusion, changing the detectors before detection has no bearing on the experimental results, because these changes are taken into account when the source "decides" the particles' spin. Bell's inequality is based on the assumption that the hidden variable that determines the particle spin is not related to the way the detectors are positioned. The above model denies this. Both the position of the detector and the spin of the particle are a direct result of the past field configuration.
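
To make concrete why Bell's derivation needs the statistical-independence assumption this model denies, here is a toy sketch (my own construction, not ueit's field model): a source that already "knows" the detector angles can simply assign each pair's definite outcomes from the Malus-law statistics, reproducing the quantum correlation with no in-flight influence at all.

```python
import math
import random

def run_trials(alpha, beta, n=100_000, seed=1):
    """Toy 'superdeterministic' source: knowing the detector angles alpha
    and beta in advance, it draws each pair's definite outcomes straight
    from the Malus-law statistics, P(same result) = cos^2(alpha - beta)."""
    rng = random.Random(seed)
    p_same = math.cos(alpha - beta) ** 2
    same = sum(rng.random() < p_same for _ in range(n))
    # Correlation E = P(same) - P(different)
    return 2 * same / n - 1

# 22.5-degree relative angle: the quantum prediction is cos(45 deg) ~ 0.707.
print(run_trials(0.0, math.pi / 8))
```

Of course this toy just hard-codes the statistics it needs; the open question in the thread is whether any local field dynamics could actually produce such a source.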

 Quote by SW VandeCarr The assumption behind this derivation is that the position and momentum of each of N particles (atoms or molecules) in a gas are uncorrelated (statistically independent). However D doesn't allow for statistical independence. Under D the position and momentum of each particle at any point in time is predetermined. Therefore there is full correlation of the momenta of all the particles.
The trajectory of the particle is a function of the field produced by all other particles in the universe, therefore D does not require a strong correlation between the particles included in the experiment. Also I do not see the relevance of predetermination to the issue of statistical independence. The digits of Pi are strictly determined, yet no correlation exists between them. What you need for the entropy law to work is not absolute randomness but pseudorandomness.
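
The pi-digits point, that strict determination is compatible with statistical independence in practice, is easy to illustrate. A sketch (using a fixed-seed pseudorandom generator as a convenient stand-in for a deterministic sequence; the digits of pi would serve the same role): the sequence is fully reproducible, yet the serial correlation between consecutive terms is negligible.

```python
import random

def serial_corr(xs):
    """Pearson correlation between consecutive terms of a sequence."""
    a, b = xs[:-1], xs[1:]
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n
    va = sum((x - ma) ** 2 for x in a) / n
    vb = sum((y - mb) ** 2 for y in b) / n
    return cov / (va * vb) ** 0.5

# A completely deterministic sequence: same seed, same numbers, every run.
rng = random.Random(42)
xs = [rng.random() for _ in range(100_000)]

print(serial_corr(xs))  # close to 0: fully determined, yet pseudorandom
```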

 Quote by ueit The trajectory of the particle is a function of the field produced by all other particles in the universe, therefore D does not require a strong correlation between the particles included in the experiment. Also I do not see the relevance of predetermination to the issue of statistical independence. The digits of Pi are strictly determined, yet no correlation exists between them. What you need for the entropy law to work is not absolute randomness but pseudorandomness.
Correlation is the degree of correspondence between two random variables. There are no random variables involved in the computation of pi.

Under D, probabilities only reflect our uncertainty. They have nothing to do with nature (as distinct from ourselves). Statistical independence is an assumption based on our uncertainty. Ten fair coin tosses are assumed to be statistically independent based on our uncertainty of the outcome. We imagine there are 1024 possible outcomes. Under D there is only one possible outcome, and if we had perfect information we could know that outcome.

Under D, not only is the past invariant, but the future is also invariant. If we had perfect information, the future would be as predictable as the past is "predictable". It's widely accepted that completed events have no information value (ie., p=1) and that information only exists under conditions of our uncertainty.

I agree that with pseudorandomness the thermodynamic laws work, but only because of our uncertainty given we lack the perfect information which could be available (in principle) under D.

EDIT: When correlation ($$R^{2}$$) is unity, the system is no longer probabilistic, in that no particle moves independently of any other. Under D all particle positions and momenta are predetermined. If a full description of particle/field states is in principle knowable in the past, it is knowable in the future under D.

 Quote by SW VandeCarr What is actually happening when the experimenter heats the gas and observes a change in the Q/T relation (entropy increases)? Under D the whole experiment is a predetermined scenario with the actions of the experimenter included. The experimenter didn't decide to heat the gas or even set up the experiment. The experimenter had no choice. She or he is an actor following the deterministic script. Everything is correlated with everything else with measure one. There really is no cause and effect. There is only the predetermined succession of states. Therefore you're going to have to give up the usual (strong) form of causality where we can perform experimental interventions to test causality (if you want D). Causality is not defined in mathematics or logic. It's usually defined operationally: given that A is the necessary, sufficient, and sole cause of B, if you remove A, then B cannot occur. Well, under D we cannot remove A unless it was predetermined that p(B)=0. At best, we can have a weak causality where we observe a succession of states that are inevitable.
The assumption of determinism and the application of probabilities are independent considerations.

I wouldn't separate causality into strong and weak types. We observe invariant relationships, or predictable event chains, or, as you say, "a succession of states that are inevitable". Cause and effect are evident at the macroscopic scale.

Determinism is the assumption that there are fundamental dynamical rules governing the evolution of any physical state or spatial configuration. We already agreed that it can't be disproven.

The distinguishing characteristic of ueit's proposal isn't that it's deterministic. What sets it apart is that it involves an infinite field of nondiminishing strength, centered on polarizers or other filtration/detection devices and/or device combinations, propagating info at c to emission devices, and thereby determining the time and type of emission, etc. So far, it doesn't make much sense to me.

We already have a way of looking at these experiments which allows for an implicit, if not explicit, local causal view.

Anyway, his main question about arguments against the assumption of determinism has been answered, and I thought we agreed on this -- there aren't any good ones.

 Quote by ThomasT Anyway, his main question about arguments against the assumption of determinism has been answered, and I thought we agreed on this -- there aren't any good ones.
Of course you can't disprove or really even argue against metaphysical assumptions (except with other metaphysical assumptions). Nature appears effectively deterministic at macro scales if we disregard human intervention and human activity in general. At quantum scales, it remains to be proven that hidden variables exist. (Afaik, there is no real evidence for hidden variables.) Therefore strict (as opposed to effective) determinism remains a matter of taste. In any case, to the extent that science uses probabilistic reasoning, science is not based de facto on strict determinism. Thermodynamics is based almost entirely on probabilistic reasoning. Quantum mechanics is deterministic only insofar as probabilities are determined and confirmed by experiment.

(Note: I'm using "effective determinism" in terms of what we actually observe within the limits of measurement, and "strict determinism" as a philosophical paradigm.)