Arguments Against Superdeterminism

In summary, the conversation discusses the concept of superdeterminism and its potential implications in the context of Bell's Theorem. One participant argues that there are examples in physics where distant objects exhibit correlations, and that superdeterminism is often dismissed without clear arguments against it. The participants also discuss the idea of a deterministic universe and the absence of free will, and how this may relate to the existence of objects such as cell phones. The conversation also touches on the relationship between quantum and classical behavior and the possibility of a Theory of Everything.
  • #71
ThomasT said:
Yes, it's our uncertainties that probabilities quantify. We assume an underlying determinism (for many good reasons). But we don't know the details of that underlying determinism. Hence, the need for probabilistic descriptions.

ueit asks for arguments against determinism. Afaik, there aren't any -- at least no definitive ones. And, as far as I can tell from this thread you haven't given any.

But, ueit's proposed explanation for Bell-type correlations is problematic for reasons that I've given.

So, where are we?

I think you're trying to have it both ways. First, just to be clear, we need to quantify uncertainty. Uncertainty is a function of a probability: U=4(p(1-p)) where 4 just scales the measure to the interval [0,1]. It's clear uncertainty is maximal when p=0.5 and 0 when p=0 or p=1.

Now you've already agreed that probability measures our uncertainty. What does our uncertainty have to do with nature? Someone tosses a fair coin behind a curtain. The coin is tossed but you don't see it. For the "tosser" uncertainty is 0. For you, it's 1.

Now we have deterministic theories that are not time dependent. The laws of physics are presumed to hold in the future just as in the past. The charge of an electron does not change with time. If we have determinism (D), it's clear that any outcome of a time dependent process is also entirely predictable in principle. That means randomness is only a reflection of our current state of knowledge. If you could have perfect information as to some future time dependent outcome, you have U=0. This corresponds to p=1 or p=0. This is what I meant when I said that under D, with perfect information, future events occur with a real probability 1 (or 0). In effect under D, we don't have probabilities in nature. We only have our uncertainty about nature.
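A quick numerical check of this uncertainty measure (a minimal Python sketch, not part of the original posts; the function name is just for illustration):

```python
def uncertainty(p: float) -> float:
    """U = 4*p*(1 - p): zero at p = 0 or p = 1, maximal (1) at p = 0.5."""
    return 4 * p * (1 - p)

for p in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"p = {p:.2f}  U = {uncertainty(p):.2f}")

# The coin behind the curtain: the tosser knows the outcome (p = 1, U = 0);
# the observer who can't see it has p = 0.5, so U = 1.
print(uncertainty(1.0), uncertainty(0.5))  # 0.0 1.0
```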
 
  • #72
SW VandeCarr said:
I think you're trying to have it both ways.
Trying to have what both ways?

SW VandeCarr said:
First, just to be clear, we need to quantify uncertainty. Uncertainty is a function of a probability: U=4(p(1-p)) where 4 just scales the measure to the interval [0,1]. It's clear uncertainty is maximal when p=0.5 and 0 when p=0 or p=1.
The probability already quantifies the uncertainty, doesn't it?

SW VandeCarr said:
Now you've already agreed that probability measures our uncertainty.
I agreed that probabilities are quantitative expressions of our uncertainties.

SW VandeCarr said:
What does our uncertainty have to do with nature?
That's what I was wondering when you said ...
SW VandeCarr said:
They (probabilities) would be objective only if we assume that nature is fundamentally probabilistic and true randomness actually exists.
... and I pointed out that probabilities are objective when they're based on observable possibilities, and that randomness refers to our observations, not the underlying reality.

SW VandeCarr said:
Someone tosses a fair coin behind a curtain. The coin is tossed but you don't see it. For the "tosser" uncertainty is 0. For you, it's 1.
OK.

SW VandeCarr said:
Now we have deterministic theories that are not time dependent. The laws of physics are presumed to hold in the future just as in the past. The charge of an electron does not change with time. If we have determinism (D), it's clear that any outcome of a time dependent process is also entirely predictable in principle.
OK, until we get to quantum stuff where that pesky quantum of action becomes significant.

SW VandeCarr said:
That means randomness is only a reflection of our current state of knowledge.
I agree.

SW VandeCarr said:
If you could have perfect information as to some future time dependent outcome, you have U=0. This corresponds to p=1 or p=0. This is what I meant when I said that under D, with perfect information, future events occur with a real probability 1 (or 0).
Of course, an event either has occurred or it hasn't.

SW VandeCarr said:
In effect under D, we don't have probabilities in nature. We only have our uncertainty about nature.
Ok, so I guess we agree on this. So far, wrt ueit's OP we don't have any good argument(s) against the assumption of determinism.

What about a field associated with filtering-measuring devices in optical Bell tests, whose strength doesn't diminish with distance, and whose values determine or trigger emissions? (Let's call it the Ueit Field.) Any arguments against that?
 
  • #73
ThomasT said:
So far, wrt ueit's OP we don't have any good argument(s) against the assumption of determinism.

Classical objective causation, the Newtonian billiard-ball style particle collision causation, has been disproven by quantum mechanical experiments. Historically, this conception of classical physical causation is what was meant by determinism in nature. If that's what we mean now (but I don't think it is) then the conversation is over.

Quantum mechanics also suggests that there are forces in the universe that are inherently unknowable. We can only directly measure the location of a particle on our detector, we can't directly measure the quantum forces that caused it to end up at that specific location instead of another. If this aspect of QM is true, then deterministic natural laws, if they exist, are inherently unknowable themselves (and you can further argue whether or not an unfalsifiable physical construct can in principle be considered to exist at all).

Of course, the problem of induction already disallows for proof of physical determinism.

Spinoza and others have argued that physical determinism is rationally guaranteed by the laws of logic. Arguments from logic are the only type that have any hope of proving determinism in nature.

On a practical level it is necessary to assume determinism in nature, and on a macro level things seem to be pretty deterministic. However, on the quantum level where this causation is supposed to be taking place, we have absolutely no idea how anything actually works. We cannot visualize any sort of causation on the quantum level, and we cannot come close to predicting any results.

An assumption of determinism stems from a macroscopic point of view that presupposes a quantum-scale mechanism for physical causation. However, when you actually look at the quantum level, no evidence for any such mechanism can be found, and it has been argued that no such mechanism could exist at all.
 
  • #74
ThomasT said:
The probability already quantifies the uncertainty, doesn't it?

Not really. You approach certainty as you move toward p=0 or p=1 from p=0.5. Uncertainty is maximal at p=0.5. Look at the equation.

ThomasT said:
OK, until we get to quantum stuff where that pesky quantum of action becomes significant.

Well, D doesn't exclude anything. It would seem you're going to have to accept hidden variables if you want D, or settle for a universe that is truly random at its smallest scales. That's what I mean by trying to have it both ways.

ThomasT said:
What about a field associated with filtering-measuring devices in optical Bell tests, whose strength doesn't diminish with distance, and whose values determine or trigger emissions? (Let's call it the Ueit Field.) Any arguments against that?

I'm not a physicist and I don't feel I'm qualified to venture an opinion on that. I feel I am qualified enough in statistics and probability to venture an opinion in this area (in a frequentist or Bayesian sense) both in the methods and the limitations of these methods.
I did just offer a tentative opinion re hidden variables. You can tell me whether I'm correct or not.
 
  • #75
Oh, right, this thread is on superdeterminism, not determinism :).

As for superdeterminism... the only way it can be logically explained is if one of your causes is superfluous. Superdeterminism is: A and B each independently caused C.

So, what caused C? Answer: A or B, take your pick.

Logically this is impossible unless A and B are actually the same thing but are being described two separate ways. In essence, superdeterminism is just straight determinism with the addition of superfluous 2nd order explanations.

Mental activity is often ascribed such 2nd order explanatory power over physical events. But the fact that physical events (assuming we have deterministic natural laws) can be explained entirely in physical terms doesn't prove that logic is being violated and superdeterminism is right, it just proves that mental activity at some level must be reducible to physical activity (ie our brains).

Bottom line: superdeterminism violates logic. It's logically impossible in its true sense. Additionally, 2nd order explanations from properties emergent on 1st order explanations are interesting, but they don't solve any problems in basic physics, which is only concerned with 1st order causation.
 
  • #76
kote said:
Oh, right, this thread is on superdeterminism, not determinism :).

The OP has agreed (as per ThomasT) that SD and D are the same.
 
  • #77
SW VandeCarr said:
You approach certainty as you move toward p=0 or p=1 from p=0.5. Uncertainty is maximal at p=0.5.
That seems pretty quantitative to me. :smile: Your equation just quantifies it in a different way.

SW VandeCarr said:
Well, D doesn't exclude anything. It would seem you're going to have to accept hidden variables if you want D, or settle for a universe that is truly random at its smallest scales. That's what I mean by trying to have it both ways.
Yes, of course, the presumed existence of hidden variables comes with the assumption of determinism. I don't think anybody would deny that hidden variables exist.

Assuming determinism, then randomness exists only at the instrumental level, the level of our sensory apprehension.
 
  • #78
kote said:
Oh, right, this thread is on superdeterminism, not determinism :).

As for superdeterminism... the only way it can be logically explained is if one of your causes is superfluous. Superdeterminism is: A and B each independently caused C.

So, what caused C? Answer: A or B, take your pick.

Logically this is impossible unless A and B are actually the same thing but are being described two separate ways. In essence, superdeterminism is just straight determinism with the addition of superfluous 2nd order explanations.

Mental activity is often ascribed such 2nd order explanatory power over physical events. But the fact that physical events (assuming we have deterministic natural laws) can be explained entirely in physical terms doesn't prove that logic is being violated and superdeterminism is right, it just proves that mental activity at some level must be reducible to physical activity (ie our brains).

Bottom line: superdeterminism violates logic. It's logically impossible in its true sense. Additionally, 2nd order explanations from properties emergent on 1st order explanations are interesting, but they don't solve any problems in basic physics, which is only concerned with 1st order causation.

Is this the standard definition of superdeterminism?
 
  • #79
ThomasT said:
Is this the standard definition of superdeterminism?

It's slightly more nuanced, but basically, yes. Superdeterminism is a second level of causation on top of one cause per effect causation. It is not necessarily true that the second level is reducible to the first level in any of its individual parts, but as a system, the second level of explanation (cause) emerges fully from the first level. Additionally, in superdeterminism, the first order explanation is always necessary and sufficient on its own.

For example, you may not find the sensation of pain anywhere in the micro level atomic world, but the sensation of pain may still emerge as a unique property from the correct systemic arrangement of atoms in my brain along with corresponding atomic impulses in my nerves etc.

This second order of properties that emerges from the first order properties of the atoms can be used in causal statements. "The pain caused me to pull my hand back from the fire." This, however, is a "super-" cause, emerging from the first order cause (or explanation) involving atoms bumping into each other according to the laws of nature at the micro level.

Edit: Now that I think about it, I may have put a little too much interpretation in my explanation. Superdeterminism just means events can have more than one immediate and sufficient cause. However, I think it's pretty trivial to conclude that this is logically impossible unless all causes are in essence the same, in which case you get what I just described.
 
  • #80
kote said:
It's slightly more nuanced, but basically, yes. Superdeterminism is a second level of causation on top of one cause per effect causation. It is not necessarily true that the second level is reducible to the first level in any of its individual parts, but as a system, the second level of explanation (cause) emerges fully from the first level. Additionally, in superdeterminism, the first order explanation is always necessary and sufficient on its own.

For example, you may not find the sensation of pain anywhere in the micro level atomic world, but the sensation of pain may still emerge as a unique property from the correct systemic arrangement of atoms in my brain along with corresponding atomic impulses in my nerves etc.

This second order of properties that emerges from the first order properties of the atoms can be used in causal statements. "The pain caused me to pull my hand back from the fire." This, however, is a "super-" cause, emerging from the first order cause (or explanation) involving atoms bumping into each other according to the laws of nature at the micro level.

Edit: Now that I think about it, I may have put a little too much interpretation in my explanation. Superdeterminism just means events can have more than one immediate and sufficient cause. However, I think it's pretty trivial to conclude that this is logically impossible unless all causes are in essence the same, in which case you get what I just described.

So superdeterminism just refers to the principles and laws governing higher order physical regimes?
 
  • #81
ThomasT said:
So superdeterminism just refers to the principles and laws governing higher order physical regimes?

It doesn't necessarily have to do with the physical realm at all. Perhaps everything is in our minds and mental causes and effects are basic to reality. We can even talk about causation in a made up universe with its own rules. Superdeterminism is about logical laws of causation. It's a mode of explanation positing overcausation.

Superdeterminism: One effect can have multiple immediate and sufficient causes.

...but read above for some of how this can and can't work :)
 
  • #82
kote, I think you've lost me, so I'm going to go back to your post #73 (on determinism) and work my way to your last post on superdeterminism, nitpicking as I go.

kote said:
Classical objective causation, the Newtonian billiard-ball style particle collision causation, has been disproven by quantum mechanical experiments.
Not disproven, but supplanted by qm wrt certain applications. Classical physics is used in a wide variety of applications. Sometimes, in semi-classical accounts, part of a system is treated classically and the other part quantum mechanically.

kote said:
Historically, this conception of classical physical causation is what was meant by determinism in nature. If that's what we mean now (but I don't think it is) then the conversation is over.
I think that's pretty much what's meant -- along the lines of an underlying deterministic wave mechanics. But the conversation isn't over, because this view is reinforced by qm and experiments, not disproven.

kote said:
Quantum mechanics also suggests that there are forces in the universe that are inherently unknowable. We can only directly measure the location of a particle on our detector, we can't directly measure the quantum forces that caused it to end up at that specific location instead of another. If this aspect of QM is true, then deterministic natural laws, if they exist, are inherently unknowable themselves (and you can further argue whether or not an unfalsifiable physical construct can in principle be considered to exist at all).
Yes, quantum theory, at least wrt the standard interpretation, does place limits on what we can know or unambiguously, objectively say about our universe. The assumption of determinism is just that, an assumption. It's based on what we do know about our universe. It places limits on the observations that might result from certain antecedent conditions. It's falsifiable in the sense that if something fantastically different from what we expect vis induction were to be observed, then our current notion of causal determinism would have to be trashed.

kote said:
Of course, the problem of induction already disallows for proof of physical determinism.
Yes, it can't be proven. Just reinforced by observations. Our windows on the underlying reality of our universe are small and very foggy. However, what is known suggests that the deep reality is deterministic and locally causal.

Induction is justified by its continued practical utility. A general understanding of why induction works at all begins with the assumption of determinism.

kote said:
On a practical level it is necessary to assume determinism in nature, and on a macro level things seem to be pretty deterministic. However, on the quantum level where this causation is supposed to be taking place, we have absolutely no idea how anything actually works. We cannot visualize any sort of causation on the quantum level, and we cannot come close to predicting any results.
It's almost that bleak, but maybe not quite. There's some idea of how some things work. There are indications that the deep reality of our universe is essentially wave mechanical. But try efficiently modelling some process or other exclusively in those terms.

kote said:
An assumption of determinism stems from a macroscopic point of view that presupposes a quantum-scale mechanism for physical causation.
Ok.

kote said:
However, when you actually look at the quantum level, no evidence for any such mechanism can be found, and it has been argued that no such mechanism could exist at all.
As you indicated earlier, we can't actually look at the quantum level, but the assumption of determinism is kept because there is evidence that there are quantum-scale mechanisms for physical causation.
 
  • #83
ThomasT said:
Induction is justified by its continued practical utility.

That's 'justifying' induction via induction. Using induction may be practical, even unavoidable, but that sort of 'justification' reduces to a common-sense circular argument, nothing more. And common sense is notoriously unreliable.

Induction is only useful when it works, and it's actually a classic example of an 'unjustified' assumption. Nothing wrong with that really, but it places limits on its utility.
 
  • #84
kote, before I begin nitpicking again, let me ask -- are you saying that superdeterminism refers to the fact that there are different levels of explanation, ie., either in terms of underlying dynamics or in terms of phenomena which emerge from those underlying dynamics?

That's what you seem to be saying by "superdeterminism: one effect can have multiple immediate and sufficient causes", and in your elaboration on that.

Or, are you saying that the higher order explanation might be referred to as superdeterministic?

If either, I don't think that that's what ueit meant to convey in using the term.

kote said:
In essence, superdeterminism is just straight determinism with the addition of superfluous 2nd order explanations.
A higher order explanation isn't necessarily superfluous, though it might, in a certain sense, be considered as such if there exists a viable lower order explanation for the same phenomenon.

In most cases, we have presumably higher order explanations in lieu of discovering lower order explanations that are presumed to exist vis the assumption of an underlying deterministic dynamics.

Anyway, since there are multiple levels of explanation, your statement above would seem to reduce to superdeterminism = determinism. Which is the conclusion we came to earlier in this thread. In other words, the term superdeterminism is superfluous.
 
  • #85
JoeDawg said:
That's 'justifying' induction via induction. Using induction may be practical, even unavoidable, but that sort of 'justification' reduces to a common-sense circular argument, nothing more.
We can begin to try to understand the deep reason for why induction works by positing that the underlying reality of our universe is causally deterministic, but that's just a vague precursor to a detailed answer to the question in terms of fundamental dynamics.

The fact that we continue to behave inductively can, at the level of our behavior, be understood as being due to the fact that it usually works.

JoeDawg said:
And common sense is notoriously unreliable.
If it were that unreliable, it wouldn't be common.

JoeDawg said:
Induction is only useful when it works ...
It usually does work.

JoeDawg said:
... and its actually a classic example of an 'unjustified' assumption. Nothing wrong with that really, but it places limits on its utility.
We can treat induction as an assumption, or as a method of reasoning, but it's more generally a basic behavioral characteristic. We behave inductively. It's part of our common behavioral heritage, our common sense.
 
  • #86
ThomasT said:
If it were that unreliable, it wouldn't be common.

Either unreliable or (not?) so often used...
 
  • #87
ThomasT said:
The fact that we continue to behave inductively can, at the level of our behavior, be understood as being due to the fact that it usually works.
worked, in the past.

Induction reasons from observed to unobserved.
You are reasoning from observed instances, in the past, where induction worked, to as yet unobserved instances in the future.
You're using induction to justify your belief in induction.

ThomasT said:
We can treat induction as an assumption, or as a method of reasoning, but it's more generally a basic behavioral characteristic. We behave inductively. It's part of our common behavioral heritage, our common sense.
Sure, we behave irrationally all the time.
Using induction involves an unjustified assumption.
That doesn't mean, once we make the assumption we can't proceed rationally.
As Hume said, induction is mere habit.

We can, of course, use it rationally, but we can't justify its usage.
 
  • #88
ThomasT said:
Using the standard referents for emitter, polarizer, and detector, in a simple optical Bell test setup involving emitter, 2 polarizers, and 2 detectors it's pretty easy to demonstrate that the polarizer settings aren't determined by the detector settings, or by the emitter, or by anything else in the design protocol except "whatever is used to change" the polarizer settings.

That may be true if you only look at the macroscopic description of the detector/emitter/etc. We do not know if the motion of particles in these objects exert an influence on the motion of particles in other objects because we only have a statistical description. The individual trajectories might be correlated even if the average force between macroscopic objects remains null.

ThomasT said:
It's been demonstrated that the method that's used to change the polarizer settings, and whether it's a randomized process or not, isn't important wrt joint detection rate. What is important is the settings that are associated with the detection attributes via the pairing process -- not how the settings themselves were generated.

I agree but this is irrelevant. Nothing has been demonstrated regarding the individual results obtained.

ThomasT said:
It's already well established that detector orientations don't trigger emissions

How?

ThomasT said:
-- and changing the settings while the emissions are in flight has no observable effect on the correlations.

I wouldn't expect that anyway.

ThomasT said:
If you want to say that these in-flight changes are having some (hidden) effect, then either there are some sort of nonlocal hidden variable(s) involved, or, as you suggest, there's some sort of heretofore unknown, and undetectable, local field that's determining the correlations. Your suggestion seems as contrived as the nonlocal models -- as well as somewhat incoherent wrt what's already known (ie., wrt working models).

The field is the "hidden variable", together with the particles' positions, so, in this sense, it is not known. However, if this field determines particles' motions it has to appear, on a statistical level, as the EM, weak, and color fields. The question is if such a field can be formulated. Unfortunately I cannot do it myself as I don't have the required skills, but I wonder if it could be achieved, or if it is mathematically impossible. Now, in the absence of a mathematical formulation it is premature to say that it must be contrived, or that the hypothesis is not falsifiable.

ThomasT said:
I still don't know what the distinguishing characteristics of a superdeterministic theory are. Can you give a general definition of superdeterminism that differentiates it from determinism? If not, then your OP is just asking for (conclusive or definitive) arguments against the assumption of determinism. There aren't any. So, the assumption that the deep reality of Nature is deterministic remains the de facto standard assumption underlying all physical science. It isn't mentioned simply because it doesn't have to be. It's generally taken for granted -- not dismissed.

What I am interested in is local determinism, as non-local determinism is clearly possible (Bohm's interpretation). There are many voices that say that it has been ruled out by EPR experiments. I am interested in seeing if any strong arguments can be put forward against the use of SD loophole (denying the statistical independence between emitter and detector) to reestablish local deterministic theories as a reasonable research object.
 
  • #89
ueit said:
What I am interested in is local determinism, as non-local determinism is clearly possible (Bohm's interpretation). There are many voices that say that it has been ruled out by EPR experiments. I am interested in seeing if any strong arguments can be put forward against the use of SD loophole (denying the statistical independence between emitter and detector) to reestablish local deterministic theories as a reasonable research object.

ueit,

Statistics are calculated mathematically from individual measurements. They are aggregate observations about likelihoods. Determinism deals with absolute rational causation. It would be an inductive fallacy to say that statistics can tell us anything about the basic mechanisms of natural causation.

Linguistically, we may use statistics as 2nd order explanations in statements of cause and effect, but it is understood that statistical explanations never represent true causation. If determinism exists, there must necessarily be some independent, sufficient, underlying cause - some mechanism of causation.

Because of the problem of induction, no irreducible superdeterministic explanation can prove anything about the first order causes and effects that basic physics is concerned with.

The very concept of locality necessarily implies classical, billiard-ball style, momentum transfer causation. The experiments of quantum mechanics have conclusively falsified this model.
 
  • #90
ThomasT said:
Not disproven, but supplanted by qm wrt certain applications. Classical physics is used in a wide variety of applications. Sometimes, in semi-classical accounts, part of a system is treated classically and the other part quantum mechanically.

ThomasT, I can treat my sister as if she's my aunt. That doesn't make it true :). Local causation stemming from real classical particles and waves has been falsified by experiments. EPRB type experiments are particularly illustrative of this fact.

ThomasT said:
Yes, it can't be proven. Just reinforced by observations. Our windows on the underlying reality of our universe are small and very foggy. However, what is known suggests that the deep reality is deterministic and locally causal.

If there is evidence of deep reality being deterministic, I would like to know what it is :). As for the universe being locally deterministic, this has been proven impossible. See above.

ThomasT said:
Induction is justified by its continued practical utility. A general understanding of why induction works at all begins with the assumption of determinism.

So we're supporting determinism by assuming determinism?

ThomasT said:
As you indicated earlier, we can't actually look at the quantum level, but the assumption of determinism is kept because there is evidence that there are quantum-scale mechanisms for physical causation.

If the evidence is inductive, then since you claim induction relies on an assumption of determinism itself, there is no evidence at all. I'm not denying the idea that there could be evidence for basic determinism, but the only evidence I've seen proposed here so far has been ethical. It has been assumptions about what we should believe and what's practical, rather than what we can know or what's true.
 
  • #91
ThomasT said:
A higher order explanation isn't necessarily superfluous, though it might, in a certain sense, be considered as such if there exists a viable lower order explanation for the same phenomenon.

ThomasT,

If there is no viable lower order explanation then by definition you aren't dealing with a higher order explanation. Higher order explanations, as such, are not necessary, and unless they are reducible to first order explanations, they cannot be sufficient either.

Basically, they aren't true causes (or explanations) at all.
 
  • #92
ueit said:
What I am interested in is local determinism, as non-local determinism is clearly possible (Bohm's interpretation). There are many voices that say that it has been ruled out by EPR experiments.
Lhv formalisms of quantum entangled states are ruled out -- not the possible existence of lhv's. As things stand now, there's no conclusive argument for either locality or nonlocality in Nature. But the available physical evidence suggests that Nature behaves deterministically according to the principle of local causation.

ueit said:
I am interested in seeing if any strong arguments can be put forward against the use of SD loophole (denying the statistical independence between emitter and detector) to reestablish local deterministic theories as a reasonable research object.
You've already agreed that the method of changing the polarizer settings, as well as whether or not they're changed while the emissions are in flight, incident on the polarizers, is irrelevant to the rate of joint detection.

The reason that Bell inequalities are violated has to do with the formal requirements due to the assumption of locality. This formal requirement also entails statistical independence of the accumulated data sets at A and B. But entanglement experiments are designed and executed to produce statistical dependence vis the pairing process.

There's no way around this unless you devise a model that can actually predict individual detections.

Or you could reason your way around the difficulty by noticing that the hidden variable (ie., the specific quality of the emission that might cause enough of it to be transmitted by the polarizer to register a detection) is irrelevant wrt the rate of joint detection (the only thing that matters wrt joint detection is the relationship, presumably produced via simultaneous emission, between the two opposite-moving disturbances). Thus preserving the idea that the correlations are due to local interactions/transmissions, while at the same time modelling the joint state in a nonseparable form. Of course, then you wouldn't have an explicitly local, explicitly hidden variable model, but rather something along the lines of standard qm.
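To make the formal point concrete, here is a minimal sketch (assuming numpy; not from the original posts) of the standard CHSH arithmetic: with the quantum correlation E(a,b) = cos 2(a-b), the CHSH sum reaches 2*sqrt(2), above the bound of 2 that follows from locality plus statistical independence between the settings and the hidden variable.

```python
import numpy as np

# Quantum/Malus-law correlation for polarization-entangled photons
E = lambda a, b: np.cos(2 * (a - b))

# Standard CHSH angle choices (radians)
a, ap = 0.0, np.pi / 4
b, bp = np.pi / 8, 3 * np.pi / 8

S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)
print(S)  # ~2.828 = 2*sqrt(2); local models with independent settings give |S| <= 2
```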
 
  • #93
ThomasT said:
Yes, of course, the presumed existence of hidden variables comes with the assumption of determinism. I don't think anybody would deny that hidden variables exist.

Well, if you assume D then you must be including local hidden variables. Therefore you're rejecting both Bell's Theorem and the Heisenberg uncertainty principle. Moreover, I guess quantum fluctuations at the Planck scale could not be random either.

My reductio ad absurdum argument was based on thermodynamics, which at the theoretical level is based on probabilities. If a system can only exist in one possible state and only transit into one other possible state, there is no Markov process. All states exist with p=1 or p=0 (past, present, and future). Under D, probabilities can only reflect our uncertainty. If you plug 0 or 1 into the Gibbs equation, you get positive infinity or 0. Any values in between (under D) are merely reflections of our uncertainty. Yet we can actually measure finite nonzero values of entropy in experiments (defined as Q/T or heat/temp). Such results cannot be only reflections of our uncertainty. Remember, there is no statistical independence under D.

None of this either proves or disproves D. I don't think it can be done. It seems to be essentially a metaphysical issue. However, it seems to me (I'm not a physicist) like you have to give up a lot to assume D at all scales.
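SW VandeCarr's point about degenerate probabilities can be checked directly with the Gibbs/Shannon form S = -k * sum(p_i ln p_i) (a minimal sketch, taking k = 1 and the usual convention 0*ln 0 = 0; assuming numpy): a distribution concentrated on one state has zero entropy, so the finite nonzero entropies we actually measure cannot come from a distribution that is secretly all 0s and 1s.

```python
import numpy as np

def gibbs_entropy(p, k=1.0):
    """S = -k * sum(p_i * ln(p_i)), with 0 * ln(0) taken as 0."""
    p = np.asarray(p, dtype=float)
    nz = p[p > 0]
    return float(-k * np.sum(nz * np.log(nz)) + 0.0)

print(gibbs_entropy([1.0, 0.0, 0.0]))  # 0.0 -- a fully determined state
print(gibbs_entropy([0.25] * 4))       # ln(4) ~ 1.386 -- maximal for 4 states
```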
 
  • #94
SW VandeCarr said:
Well, if you assume D then you must be including local hidden variables. Therefore you're rejecting both Bell's Theorem and the Heisenberg uncertainty principle. Moreover, I guess quantum fluctuations at the Planck scale could not be random either.
Yes, I'm including local hidden variables. Bell's analysis has to do with the formal requirements of lhv models of entangled states, not with what might or might not exist in an underlying quantum reality. The HUP has to do with the relationship between measurements on canonically conjugate variables. The product of the statistical spreads is equal to or greater than Planck's constant. Quantum fluctuations come from an application of the HUP. None of this tells us whether or not there is an underlying quantum reality. I would suppose that most everybody believes there is. It also doesn't tell us whether Nature is local or nonlocal. So, the standard assumption is that it's local.


SW VandeCarr said:
None of this either proves or disproves D. I don't think it can be done.
I agree.

SW VandeCarr said:
It seems to be essentially a metaphysical issue.
I suppose so, but not entirely insofar as metaphysical constructions can be evaluated wrt our observations of Nature. And I don't think that one has to give up anything that's accepted as standard mainstream physical science to believe in a locally deterministic underlying reality.
 
  • #95
kote said:
Local causation stemming from real classical particles and waves has been falsified by experiments. EPRB type experiments are particularly illustrative of this fact.
These are formal issues. Not matters of fact about what is or isn't true about an underlying reality.

kote said:
If there is evidence of deep reality being deterministic, I would like to know what it is :).
It's all around you. Order and predictability are the rule in physical science, not the exception. The deterministic nature of things is apparent on many levels, even wrt quantum experimental phenomena. Some things are impossible to predict, but, in general, things are not observed to happen independently of antecedent events. The most recent past (the present) is only slightly different from 1 second before. Take a movie of any physical process that you can visually track and look at it frame by frame.

There isn't any compelling reason to believe that there aren't any fundamental deterministic dynamics governing the evolution of our universe, or that the dynamics of waves in media is essentially different wrt any scale of behavior. In fact, quantum theory incorporates lots of classical concepts and analogs.

kote said:
As for the universe being locally deterministic, this has been proven impossible.
This is just wrong. Where did you get this from?

Anyway, maybe you should start a new thread here in the philosophy forum on induction and/or determinism. I wouldn't mind discussing it further, but I don't think we're helping ueit wrt the thread topic.
 
  • #96
ThomasT said:
I suppose so, but not entirely insofar as metaphysical constructions can be evaluated wrt our observations of Nature. And I don't think that one has to give up anything that's accepted as standard mainstream physical science to believe in a locally deterministic underlying reality.

I think we may have to give up more if we want D. You didn't address my thermodynamic argument. Entropy is indeed a measure of our uncertainty regarding the state of a system. We already agreed that our uncertainty has nothing to do with nature. Yet how is it that we can measure entropy as the relation Q/T? The following shows how we can derive the direct measure of entropy from first principles (courtesy of Count Iblis):

http://en.wikipedia.org/wiki/Fundamental_thermodynamic_relation#Derivation_from_first_principles

The assumption behind this derivation is that the position and momentum of each of N particles (atoms or molecules) in a gas are uncorrelated (statistically independent). However, D doesn't allow for statistical independence. Under D the position and momentum of each particle at any point in time are predetermined. Therefore there is full correlation of the momenta of all the particles.

What is actually happening when the experimenter heats the gas and observes a change in the Q/T relation (entropy increases)? Under D the whole experiment is a predetermined scenario with the actions of the experimenter included. The experimenter didn't decide to heat the gas or even set up the experiment. The experimenter had no choice. She or he is an actor following the deterministic script. Everything is correlated with everything else with measure one. There really is no cause and effect. There is only the predetermined succession of states. Therefore you're going to have to give up the usual (strong) form of causality, where we can perform experimental interventions to test causality (if you want D).

Causality is not defined in mathematics or logic. It's usually defined operationally where, given A is the necessary, sufficient and sole cause of B, if you remove A, then B cannot occur. Well under D we cannot remove A unless it was predetermined that p(B)=0. At best, we can have a weak causality where we observe a succession of states that are inevitable.
 
  • #97
kote said:
ueit,

Statistics are calculated mathematically from individual measurements. They are aggregate observations about likelihoods. Determinism deals with absolute rational causation. It would be an inductive fallacy to say that statistics can tell us anything about the basic mechanisms of natural causation.

There is no fallacy here. One may ask what deterministic models could fit the statistical data. If you are lucky you may falsify some of them and find the "true" one. There is no guarantee of success but there is no fallacy either.

kote said:
Linguistically, we may use statistics as 2nd order explanations in statements of cause and effect, but it is understood that statistical explanations never represent true causation. If determinism exists, there must necessarily be some independent, sufficient, underlying cause - some mechanism of causation.

I don't understand the meaning of "independent" cause. Independent from what? Most probably, the "cause" is just the state of the universe in the past.

kote said:
Because of the problem of induction, no irreducible superdeterministic explanation can prove anything about the first order causes and effects that basic physics is concerned with.

No absolute proof is possible in science and I do not see any problem with that. Finding a SD mechanism behind QM could lead to new physics and I find this interesting.

kote said:
The very concept of locality necessarily implies classical, billiard-ball style, momentum transfer causation. The experiments of quantum mechanics have conclusively falsified this model.

This is false. General relativity or classical electrodynamics are local theories, yet they are not based on the billiard-ball concept but on fields.
 
  • #98
ThomasT said:
Lhv formalisms of quantum entangled states are ruled out -- not the possible existence of lhv's. As things stand now, there's no conclusive argument for either locality or nonlocality in Nature. But the available physical evidence suggests that Nature behaves deterministically according to the principle of local causation.

You've already agreed that the method of changing the polarizer settings, as well as whether or not they're changed while the emissions are in flight, incident on the polarizers, is irrelevant to the rate of joint detection.

The reason that Bell inequalities are violated has to do with the formal requirements due to the assumption of locality. This formal requirement also entails statistical independence of the accumulated data sets at A and B. But entanglement experiments are designed and executed to produce statistical dependence vis the pairing process.

There's no way around this unless you devise a model that can actually predict individual detections.

Or you could reason your way around the difficulty by noticing that the hidden variable (ie., the specific quality of the emission that might cause enough of it to be transmitted by the polarizer to register a detection) is irrelevant wrt the rate of joint detection (the only thing that matters wrt joint detection is the relationship, presumably produced via simultaneous emission, between the two opposite-moving disturbances). Thus preserving the idea that the correlations are due to local interactions/transmissions, while at the same time modelling the joint state in a nonseparable form. Of course, then you wouldn't have an explicitly local, explicitly hidden variable model, but rather something along the lines of standard qm.

I think I should better explain what I think happens in an EPR experiment.

1. At source location, the field is a function of the detectors' state. Because the model is local, this information is "old". If the detectors are 1 ly away, then the source "knows" the detectors' state as it was 1 year in the past.

2. From this available information and the deterministic evolution law the source "computes" the future state of the detectors when the particles arrive there.

3. The actual spin of the particles is set at the moment of emission and does not change on flight.

4. The correlations are a direct result of the way the source "chooses" the spins of the entangled particles. It so happens that this "choice" follows Malus's law.

In conclusion, changing the detectors before detection has no relevance on the experimental results because these changes are taken into account when the source "decides" the particles' spin. Bell's inequality is based on the assumption that the hidden variable that determines the particle spin is not related to the way the detectors are positioned. The above model denies this. Both the position of the detector and the spin of the particle are a direct result of the past field configuration.
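A toy Monte Carlo version of this model (purely illustrative, assuming numpy; the function name is hypothetical) shows the logic of the SD loophole: because the source conditions on both detector settings, it can fix each pair's outcomes locally at emission and still reproduce the Malus-law correlation E(a,b) = cos 2(a-b), which Bell's inequality forbids for setting-independent hidden variables.

```python
import numpy as np

rng = np.random.default_rng(0)

def superdeterministic_pair(a, b):
    """The source 'knows' both future detector angles a, b (radians) and
    draws +-1 outcomes whose statistics match E(a, b) = cos(2*(a - b))."""
    E = np.cos(2 * (a - b))
    p_same = (1 + E) / 2          # P(A == B) for zero marginals and correlation E
    A = rng.choice([-1, 1])
    B = A if rng.random() < p_same else -A
    return A, B

a, b = 0.0, np.pi / 8
pairs = [superdeterministic_pair(a, b) for _ in range(100_000)]
print(np.mean([A * B for A, B in pairs]), np.cos(2 * (a - b)))  # both ~0.707
```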
 
  • #99
SW VandeCarr said:
The assumption behind this derivation is that the position and momentum of each of N particles (atoms or molecules) in a gas are uncorrelated (statistically independent). However, D doesn't allow for statistical independence. Under D the position and momentum of each particle at any point in time are predetermined. Therefore there is full correlation of the momenta of all the particles.

The trajectory of the particle is a function of the field produced by all other particles in the universe, therefore D does not require a strong correlation between the particles included in the experiment. Also I do not see the relevance of predetermination to the issue of statistical independence. The digits of Pi are strictly determined, yet no correlation exists between them. What you need for the entropy law to work is not absolute randomness but pseudorandomness.
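ueit's Pi example is easy to check numerically (a minimal sketch, assuming mpmath and numpy are available): the digits are fully determined, yet they pass simple tests of uniformity and serial independence, which is all the entropy argument needs.

```python
from mpmath import mp
import numpy as np

mp.dps = 10_005  # enough precision for 10,000 digits after the decimal point
digits = np.array([int(c) for c in str(mp.pi)[2:10_002]])

# Digit frequencies: each appears ~10% of the time despite being determined.
print(np.bincount(digits, minlength=10) / len(digits))

# Lag-1 serial correlation: ~0, i.e. consecutive digits look independent.
print(np.corrcoef(digits[:-1], digits[1:])[0, 1])
```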
 
  • #100
ueit said:
The trajectory of the particle is a function of the field produced by all other particles in the universe, therefore D does not require a strong correlation between the particles included in the experiment. Also I do not see the relevance of predetermination to the issue of statistical independence. The digits of Pi are strictly determined, yet no correlation exists between them. What you need for the entropy law to work is not absolute randomness but pseudorandomness.

Correlation is the degree of correspondence between two random variables. There are no random variables involved in the computation of pi.

Under D, probabilities only reflect our uncertainty. They have nothing to do with nature (as distinct from ourselves). Statistical independence is an assumption based on our uncertainty. Ten fair coin tosses are assumed to be statistically independent based on our uncertainty of the outcome. We imagine there are 1024 possible outcomes. Under D there is only one possible outcome, and if we had perfect information we could know that outcome.

Under D, not only is the past invariant, but the future is also invariant. If we had perfect information the future would be as predictable as the past is "predictable". It's widely accepted that completed events have no information value (ie p=1) and that information only exists under conditions of our uncertainty.

I agree that with pseudorandomness the thermodynamic laws work, but only because of our uncertainty given we lack the perfect information which could be available (in principle) under D.

EDIT: When the correlation ([tex]R^{2}[/tex]) is unity, the system is no longer probabilistic, in that no particle moves independently of any other. Under D all particle positions and momenta are predetermined. If a full description of particle/field states is in principle knowable in the past, it is knowable in the future under D.
 
  • #101
SW VandeCarr said:
What is actually happening when the experimenter heats the gas and observes a change in the Q/T relation (entropy increases)? Under D the whole experiment is a predetermined scenario with the actions of the experimenter included. The experimenter didn't decide to heat the gas or even set up the experiment. The experimenter had no choice. She or he is an actor following the deterministic script. Everything is correlated with everything else with measure one. There really is no cause and effect. There is only the predetermined succession of states. Therefore you're going to have to give up the usual (strong) form of causality, where we can perform experimental interventions to test causality (if you want D).

Causality is not defined in mathematics or logic. It's usually defined operationally where, given A is the necessary, sufficient and sole cause of B, if you remove A, then B cannot occur. Well under D we cannot remove A unless it was predetermined that p(B)=0. At best, we can have a weak causality where we observe a succession of states that are inevitable.
The assumption of determinism and the application of probabilities are independent considerations.

I wouldn't separate causality into strong and weak types. We observe invariant relationships, or predictable event chains, or, as you say, "a succession of states that are inevitable". Cause and effect are evident at the macroscopic scale.

Determinism is the assumption that there are fundamental dynamical rules governing the evolution of any physical state or spatial configuration. We already agreed that it can't be disproven.

The distinguishing characteristic of ueit's proposal isn't that it's deterministic. What sets it apart is that it involves an infinite field of nondiminishing strength centered on polarizer or other filtration/detection devices and/or device combinations and propagating info at c to emission devices thereby determining the time and type of emission, etc., etc. So far, it doesn't make much sense to me.

We already have a way of looking at these experiments which allows for an implicit, if not explicit, local causal view.

Anyway, his main question about arguments against the assumption of determinism has been answered, and I thought we agreed on this -- there aren't any good ones.
 
  • #102
ThomasT said:
Anyway, his main question about arguments against the assumption of determinism has been answered, and I thought we agreed on this -- there aren't any good ones.

Of course you can't disprove or really even argue against metaphysical assumptions (except with other metaphysical assumptions). Nature appears effectively deterministic at macro-scales if we disregard human intervention and human activity in general. At quantum scales, it remains to be proven that hidden variables exist. (Afaik, there is no real evidence for hidden variables.) Therefore strict (as opposed to effective) determinism remains a matter of taste. In any case, to the extent that science uses probabilistic reasoning, science is not based de facto on strict determinism. Thermodynamics is based almost entirely on probabilistic reasoning. Quantum mechanics is deterministic only insofar as probabilities are determined and confirmed by experiment.

(Note: I'm using "effective determinism" in terms of what we actually observe within the limits of measurement, and "strict determinism" as a philosophical paradigm.)
 
  • #103
SW VandeCarr said:
At quantum scales, it remains to be proven that hidden variables exist. (Afaik, there is no real evidence for hidden variables).
I think everybody should believe that hidden variables exist, ie., that there are deeper levels of reality than our sensory faculties reveal to us. The evidence is electrical, magnetic, gravitational, etc., phenomena.

Whether local hidden variable mathematical formulations of certain experimental preparations are possible is another thing altogether. This was addressed by Bell.

Ueit is interested in lhv models. Bell says we're not going to have them for quantum entangled states, and so far nobody has found a way around his argument.
 
  • #104
ThomasT said:
Anyway, his main question about arguments against the assumption of determinism has been answered, and I thought we agreed on this -- there aren't any good ones.


That's right, but how is that different from the idea that we are living in the Matrix? And if no good arguments can be put forward against it, is that a model of the universe that you think should even be considered by science?
 
  • #105
WaveJumper said:
That's right, but how is that different from the idea that we are living in the Matrix? And if no good arguments can be put forward against it, is that a model of the universe that you think should even be considered by science?
One difference is that there are some good reasons to believe in determinism. It seems that our universe is evolving in a somewhat predictable way. There are many particular examples of deterministic evolution on various scales. This suggests some pervading fundamental dynamic(s). So, physics makes that assumption.

We might be in some sort of Matrix. But there's no particular reason to think that we are. The question is, does our shared, objective reality seem more deterministic the more we learn about it?
 
