Arguments Against Superdeterminism

  • Thread starter: ueit
  • Tags: Superdeterminism

Summary:
Superdeterminism (SD) challenges the statistical independence assumed in quantum mechanics, particularly in the context of Bell's Theorem, suggesting that all events, including human decisions, are predetermined. This theory is often dismissed in scientific discussions, with calls for clearer arguments against it. Critics argue that SD implies a lack of free will, raising questions about the origins of human creativity and technological advancements, such as cell phones and colliders. The conversation also touches on the philosophical implications of determinism, questioning the nature of existence and the illusion of self. Ultimately, the discussion highlights the need for a comprehensive theory that reconciles quantum and classical behaviors while addressing the implications of determinism.
  • #61
ueit said:
As ThomasT says, there is no difference between superdeterminism and determinism.

Regarding your argument, AFAIK a reversible system has a constant entropy. If our universe is SD then it has a constant entropy. However, we cannot measure the entropy of the universe, only that of a part of it. But such a part is not reversible, because the interactions with the environment are not taken into account; therefore its entropy may increase.

I saw your post after I posted my last post quoting ThomasT. If you concede that the entropy of the universe is constant under SD, what does that say about the Second Law? As for local environments, I'm not sure your argument rescues the Second Law. Local entropy may increase, decrease, or remain constant against the background of a constant-entropy universe. In any case, entropy is imaginary if there are no objective probabilities. IMHO, you can have SD or the Second Law, but not both.
 
  • #62
SW VandeCarr said:
Just a short response for now. What kind of entropy does the equation I cited (S_{B}) describe? If it's the "fine grained" entropy and it's always zero, what good is it?


That formula is completely general and can describe any kind of entropy. To get the entropy we use in practice, you always have to use a coarse-graining procedure to define the probabilities.

If you have a given number of molecules in a given volume and you were to specify the energy of the system exactly, then there is only one quantum state the system can be in. So, what you do is explicitly specify some small energy uncertainty delta E and then count the number of microstates that lie within that small energy interval. Then the fundamental assumption that all these microstates are equally likely yields the Boltzmann formula:

S = k Log(Omega)

where Omega is the number of microstates.


Check out http://en.wikipedia.org/wiki/Fundamental_thermodynamic_relation#Derivation_from_first_principles which derives

dS = dQ/T

from

S = k Log(Omega)

to see the importance of specifying the delta E.

So, in the end we arrive at:

dS = dE/T + X dx/T

where X is the generalized force and x an external parameter. And for a reversible change we have dE + X dx = dQ.
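A minimal numerical sketch of the coarse-graining step described above (purely illustrative; the toy spectrum, the choice of delta E, and the function name are my own choices, not anything from the thread):

```python
import numpy as np

k_B = 1.380649e-23  # Boltzmann constant in J/K

def coarse_grained_entropy(energies, E, delta_E, k=k_B):
    """Boltzmann entropy S = k ln(Omega), with Omega the number of
    microstates whose energy falls in the window [E, E + delta_E]."""
    omega = np.count_nonzero((energies >= E) & (energies < E + delta_E))
    if omega == 0:
        raise ValueError("no microstates in the chosen energy window")
    return k * np.log(omega)

# Toy spectrum (assumed): 1000 microstates with arbitrary energies in [0, 1).
rng = np.random.default_rng(seed=0)
energies = rng.uniform(0.0, 1.0, size=1000)

# Specifying the energy exactly (delta_E -> 0) picks out roughly one state, so S -> 0;
# a finite delta_E gives the nonzero coarse-grained entropy used in practice.
print(coarse_grained_entropy(energies, E=0.5, delta_E=0.05))
```

The only point of the sketch is that Omega, and hence S, depends on the chosen delta E, which is the coarse-graining the post describes.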
 
  • #63
ueit said:
In order to falsify the mechanism one should propose a clear mathematical structure. I am not able to propose it. But if you want a "cheap" example of a SD theory just take Bohm's interpretation, as it is, and replace in the equation the instantaneous, "present" particle distribution with a past distribution so that locality is observed. The "present" distribution is then "predicted" by the particle from the past one.

It would be more interesting to find a formulation that is not such obviously ad-hoc, but for the time being I was only interested if there are well formulated arguments against SD.
Let's push your model a bit deeper and see if it survives. We know from QCD that more than 99% of the mass of matter is concentrated in the nuclei, and the mass of the quarks themselves only adds up to a few percent. The rest of the mass (>96%) comes from virtual gluons that randomly pop into existence and disappear again from the quantum vacuum. The Higgs field is theorized to make up the other few percent and give mass to quarks and electrons through virtual Higgs bosons, and it is thought to derive its energy from the quantum vacuum too. So it appears we are very close to having evidence that all of physical matter in space-time emerges from the timeless and spaceless Planck scale through random quantum fluctuations. This may resolve the biggest of all questions, "Why is there something instead of nothing?", through adjustments to both how we view "something" and how we view "nothing".

But one could wonder: are quantum fluctuations really random if they have the ability to create such immense volumes of information at our macro scale (the whole structure of reality as we see it)? Only infinity, and the notion that given infinity everything that can occur will occur in the quantum vacuum, can provide a somewhat coherent explanation. What's your opinion?
 
  • #64
Count Iblis said:
That formula is completely general and can describe any kind of entropy. To get the entropy we use in practice, you always have to use a coarse-graining procedure to define the probabilities.

Thanks for clarifying that. Given that the Gibbs formula is good for both fine-grained and coarse-grained entropy, it would seem that under SD the Second Law is restricted to specific experiments at most, and is not a universal principle. Therefore, with SD, we cannot explain the arrow of time in terms of the Second Law, nor can we justify entropy as an objective concept (see my previous posts). SD may even mean that we have to give up the idea of randomness at all scales, even random quantum fluctuations at the Planck scale.
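For reference, the Gibbs formula mentioned here is not written out in this part of the thread; its standard form, with k the Boltzmann constant and p_i the probability of microstate i, is

S = -k \sum_i p_i \ln p_i

which reduces to the Boltzmann form S = k \ln \Omega when all \Omega accessible microstates are equally likely (p_i = 1/\Omega), and to zero when a single microstate has probability one.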
 
  • #65
SW VandeCarr, it seems to me that probabilistic descriptions and the concepts of entropy and information are independent from the assumption of determinism.

It's assumed that the underlying evolution of any physical system is deterministic. This assumption is objective in the sense that, and insofar as, it's inferable from the development of dynamical laws and principles that correspond with the objective data.

However, the assumption of determinism doesn't inform us about the details of the underlying (real) state or evolution of any physical system. Even though it's assumed that any physical system can be in only one (real) state at any given time, those (real) states are generally unknown, and the behavior of observable objects is increasingly unpredictable as its dependence on unknown factors increases. So probabilistic descriptions are often necessary, and their use doesn't contradict the underlying assumption of determinism.

You wrote:
We only imagine there are multiple possible outcomes. In reality (under SD) there is only one possible outcome and probabilities have no objective meaning.
If the underlying states and dynamics were known, then probabilistic descriptions would be obviated. But they aren't.

However, this doesn't mean that probabilistic descriptions aren't objective. It isn't our imaginations that tell us that the tossed coin is going to come up either heads or tails.

How can we talk about the entropy of the universe increasing when there is no objective entropy?
I don't know quite how to think about the entropy of the universe. Is there a definitive statement about the entropy of the universe? I've seen several different values given, none of which are 0.

In any case, entropy is connected with the dissipation of energy and the arrow of time -- the archetypal example of which is the radiative arrow. Drop a pebble into a smooth pool of water. The evolution of the wavefront is deterministic, isn't it?

I don't think your argument is the reason ueit's proposal should be rejected. There are other reasons, not the least of which is the notion of fields whose strength doesn't diminish with distance.
 
  • #66
SW VandeCarr said:
Therefore, with SD, we cannot explain the arrow of time in terms of the Second Law, nor can we justify entropy as an objective concept. (see my previous posts.) SD may even mean that we have to give up the idea of randomness at all scales, even random quantum fluctuations at Planck scales.
The Second Law doesn't explain the arrow of time. It's just a generalization of it. Since observations are so far in agreement with it, it's kept.

Entropy, in its many forms, is very much an objective concept insofar as it depends on measurements.

The assumption of determinism isn't at odds with randomness. Randomness refers to unpredictability. We use words like random and spontaneous when we can't specify causal antecedents. This doesn't mean that there aren't any.
 
  • #67
ThomasT said:
However, this doesn't mean that probabilistic descriptions aren't objective. It isn't our imaginations that tell us that the tossed coin is going to come up either heads or tails.

I don't know quite how to think about the entropy of the universe. Is there a definitive statement about the entropy of the universe? I've seen several different values given, none of which are 0.

The entropy of the universe, whatever it might be, is definitely not 0. ueit has already agreed that SD implies a constant entropy for the universe, and if the universe is in just one possible spacetime state (a block universe), then all events occur with a real probability of one, which yields zero when plugged into the Gibbs equation for entropy. This is an argument against SD.
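A quick numerical illustration of that claim (a sketch in Python; taking k = 1 and the function name are my own choices): the Gibbs entropy of a distribution in which one outcome has probability one is zero, while a uniform distribution over Omega outcomes gives ln(Omega).

```python
import numpy as np

def gibbs_entropy(p, k=1.0):
    """Gibbs entropy S = -k * sum_i p_i ln(p_i); terms with p_i = 0 contribute nothing."""
    p = np.asarray(p, dtype=float)
    nonzero = p[p > 0]
    return -k * np.sum(nonzero * np.log(nonzero))

print(gibbs_entropy([1.0, 0.0, 0.0, 0.0]))  # 0     -- every event "certain", as under SD
print(gibbs_entropy([0.25] * 4))            # 1.386 -- ln(4), maximal for four outcomes
```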

Regarding probabilities, if the uncertainty is due only to a lack of complete information, the probabilities are not objective. They would be objective only if we assume that nature is fundamentally probabilistic and true randomness actually exists.
 
  • #68
SW VandeCarr said:
ueit has already agreed that SD implies a constant entropy for the universe ...
He's also agreed that SD is synonymous with standard determinism. There's a better name for what he's proposing, which I'll suggest to him. Anyway, determinism doesn't imply a constant entropy.

SW VandeCarr said:
... and if the universe is in just one possible spacetime state (block universe), all events occur with a real probability of one, which yields zero when plugged into the Gibbs equation for entropy. This is an argument against SD.
I don't think the block universe model should be taken literally, as a realistic representation. The universe is assumed to be in one possible, transitory, spatial configuration at any given time wrt evolutionary (deterministic), presentist models.

Saying that all events occur with a real probability of one is meaningless. Probabilities are applicable before, not after, the facts of observation.

SW VandeCarr said:
Regarding probabilities, if the uncertainty is due only to a lack of complete information, the probabilities are not objective.
If we had complete information we wouldn't need probabilities. What is non objective about the observation that a tossed coin will come up either heads or tails?

SW VandeCarr said:
They would be objective only if we assumed that nature was fundamentally probabilistic and true randomness actually existed.
No. Probabilities are objective when they're based on observable possibilities. Randomness refers to our observations, not the deep reality of Nature. True randomness does exist. There are lots of things that we really can't predict. :smile:

Why would we assume that Nature is fundamentally probabilistic when there are so many reasons to believe that it isn't?
 
  • #69
ThomasT said:
Why would we assume that Nature is fundamentally probabilistic when there are so many reasons to believe that it isn't?

I'm not assuming anything. I don't know. I'm saying if...then. Given determinism, the future is as well determined as the past. We just don't know for certain what it will be. Therefore, it's our uncertainty that probabilities quantify.
 
  • #70
SW VandeCarr said:
I'm not assuming anything. I don't know. I'm saying if...then. Given determinism, the future is as well determined as the past. We just don't know for certain what it will be. Therefore, it's our uncertainty that probabilities quantify.
Yes, it's our uncertainties that probabilities quantify. We assume an underlying determinism (for many good reasons). But we don't know the details of that underlying determinism. Hence the need for probabilistic descriptions.

ueit asks for arguments against determinism. Afaik, there aren't any -- at least no definitive ones. And, as far as I can tell from this thread you haven't given any.

But, ueit's proposed explanation for Bell-type correlations is problematic for reasons that I've given.

So, where are we?
 
  • #71
ThomasT said:
Yes, it's our uncertainties that probabilities quantify. We assume an underlying determinism (for many good reasons). But we don't know the details of that underlying determinism. Hence the need for probabilistic descriptions.

ueit asks for arguments against determinism. Afaik, there aren't any -- at least no definitive ones. And, as far as I can tell from this thread you haven't given any.

But, ueit's proposed explanation for Bell-type correlations is problematic for reasons that I've given.

So, where are we?

I think you're trying to have it both ways. First, just to be clear, we need to quantify uncertainty. Uncertainty is a function of a probability: U = 4p(1 - p), where the factor 4 just scales the measure to the interval [0,1]. It's clear uncertainty is maximal when p = 0.5 and 0 when p = 0 or p = 1.
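A short Python sketch of that uncertainty measure as defined above (the function name is mine; the formula is the one stated in the post):

```python
def uncertainty(p):
    """Scaled uncertainty U = 4*p*(1 - p): 0 at p = 0 or p = 1, maximal (1) at p = 0.5."""
    return 4.0 * p * (1.0 - p)

for p in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"p = {p:.2f}  ->  U = {uncertainty(p):.2f}")
# p = 0.00  ->  U = 0.00
# p = 0.25  ->  U = 0.75
# p = 0.50  ->  U = 1.00
# p = 0.75  ->  U = 0.75
# p = 1.00  ->  U = 0.00
```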

Now you've already agreed that probability measures our uncertainty. What does our uncertainty have to do with nature? Someone tosses a fair coin behind a curtain. The coin is tossed but you don't see it. For the "tosser" uncertainty is 0. For you, it's 1.

Now we have deterministic theories that are not time dependent. The laws of physics are presumed to hold in the future just as in the past. The charge of an electron does not change with time. If we have determinism (D), it's clear that any outcome of a time dependent process is also entirely predictable in principle. That means randomness is only a reflection of our current state of knowledge. If you could have perfect information as to some future time dependent outcome, you have U=0. This corresponds to p=1 or p=0. This is what I meant when I said that under D, with perfect information, future events occur with a real probability 1 (or 0). In effect under D, we don't have probabilities in nature. We only have our uncertainty about nature.
 
  • #72
SW VandeCarr said:
I think you're trying to have it both ways.
Trying to have what both ways?

SW VandeCarr said:
First, just to be clear, we need to quantify uncertainty. Uncertainty is a function of a probability: U = 4p(1 - p), where the factor 4 just scales the measure to the interval [0,1]. It's clear uncertainty is maximal when p = 0.5 and 0 when p = 0 or p = 1.
The probability already quantifies the uncertainty, doesn't it?

SW VandeCarr said:
Now you've already agreed that probability measures our uncertainty.
I agreed that probabilities are quantitative expressions of our uncertainties.

SW VandeCarr said:
What does our uncertainty have to do with nature?
That's what I was wondering when you said ...
SW VandeCarr said:
They (probabilities) would be objective only if we assume that nature is fundamentally probabilistic and true randomness actually exists.
... and I pointed out that probabilities are objective when they're based on observable possibilities, and that randomness refers to our observations, not the underlying reality.

SW VandeCarr said:
Someone tosses a fair coin behind a curtain. The coin is tossed but you don't see it. For the "tosser" uncertainty is 0. For you, it's 1.
OK.

SW VandeCarr said:
Now we have deterministic theories that are not time dependent. The laws of physics are presumed to hold in the future just as in the past. The charge of an electron does not change with time. If we have determinism (D), it's clear that any outcome of a time dependent process is also entirely predictable in principle.
OK, until we get to quantum stuff where that pesky quantum of action becomes significant.

SW VandeCarr said:
That means randomness is only a reflection of our current state of knowledge.
I agree.

SW VandeCarr said:
If you could have perfect information as to some future time dependent outcome, you have U=0. This corresponds to p=1 or p=0. This is what I meant when I said that under D, with perfect information, future events occur with a real probability 1 (or 0).
Of course, an event either has occurred or it hasn't.

SW VandeCarr said:
In effect under D, we don't have probabilities in nature. We only have our uncertainty about nature.
Ok, so I guess we agree on this. So far, wrt ueit's OP we don't have any good argument(s) against the assumption of determinism.

What about a field associated with filtering-measuring devices in optical Bell tests, whose strength doesn't diminish with distance, and whose values determine or trigger emissions? (Let's call it the Ueit Field.) Any arguments against that?
 
  • #73
ThomasT said:
So far, wrt ueit's OP we don't have any good argument(s) against the assumption of determinism.

Classical objective causation, the Newtonian billiard-ball style particle collision causation, has been disproven by quantum mechanical experiments. Historically, this conception of classical physical causation is what was meant by determinism in nature. If that's what we mean now (but I don't think it is) then the conversation is over.

Quantum mechanics also suggests that there are forces in the universe that are inherently unknowable. We can only directly measure the location of a particle on our detector, we can't directly measure the quantum forces that caused it to end up at that specific location instead of another. If this aspect of QM is true, then deterministic natural laws, if they exist, are inherently unknowable themselves (and you can further argue whether or not an unfalsifiable physical construct can in principle be considered to exist at all).

Of course, the problem of induction already disallows for proof of physical determinism.

Spinoza and others have argued that physical determinism is rationally guaranteed by the laws of logic. Arguments from logic are the only type that have any hope of proving determinism in nature.

On a practical level it is necessary to assume determinism in nature, and on a macro level things seem to be pretty deterministic. However, on the quantum level where this causation is supposed to be taking place, we have absolutely no idea how anything actually works. We cannot visualize any sort of causation on the quantum level, and we cannot come close to predicting any results.

An assumption of determinism stems from a macroscopic point of view that presupposes a quantum-scale mechanism for physical causation. However, when you actually look at the quantum level, no evidence for any such mechanism can be found, and it has been argued that no such mechanism could exist at all.
 
  • #74
ThomasT said:
The probability already quantifies the uncertainty, doesn't it?

Not really. You approach certainty as you move toward p = 0 or p = 1 from p = 0.5. Uncertainty is maximal at p = 0.5. Look at the equation.

OK, until we get to quantum stuff where that pesky quantum of action becomes significant.

Well, D doesn't exclude anything. It would seem you're going to have to accept hidden variables if you want D, or settle for a universe that is truly random at its smallest scales. That's what I mean by trying to have it both ways.

What about a field associated with filtering-measuring devices in optical Bell tests, whose strength doesn't diminish with distance, and whose values determine or trigger emissions? (Let's call it the Ueit Field.) Any arguments against that?

I'm not a physicist and I don't feel I'm qualified to venture an opinion on that. I do feel qualified enough in statistics and probability to venture an opinion in this area (in a frequentist or Bayesian sense), regarding both the methods and their limitations. I did just offer a tentative opinion re hidden variables. You can tell me whether I'm correct or not.
 
  • #75
Oh, right, this thread is on superdeterminism, not determinism :).

As for superdeterminism... the only way it can be logically explained is if one of your causes is superfluous. Superdeterminism is: A and B each independently caused C.

So, what caused C? Answer: A or B, take your pick.

Logically this is impossible unless A and B are actually the same thing but are being described two separate ways. In essence, superdeterminism is just straight determinism with the addition of superfluous 2nd order explanations.

Mental activity is often ascribed such 2nd order explanatory power over physical events. But the fact that physical events (assuming we have deterministic natural laws) can be explained entirely in physical terms doesn't prove that logic is being violated and superdeterminism is right, it just proves that mental activity at some level must be reducible to physical activity (ie our brains).

Bottom line: superdeterminism violates logic. It's logically impossible in its true sense. Additionally, 2nd order explanations from properties emergent on 1st order explanations are interesting, but they don't solve any problems in basic physics, which is only concerned with 1st order causation.
 
  • #76
kote said:
Oh, right, this thread is on superdeterminism, not determinism :).

The OP has agreed (as per ThomasT) that SD and D are the same.
 
  • #77
SW VandeCarr said:
You approach certainty as you move toward p = 0 or p = 1 from p = 0.5. Uncertainty is maximal at p = 0.5.
That seems pretty quantitative to me. :smile: Your equation just quantifies it in a different way.

SW VandeCarr said:
Well, D doesn't exclude anything. It would seem you're going to have to accept hidden variables if you want D, or settle for a universe that is truly random at its smallest scales. That's what I mean by trying to have it both ways.
Yes, of course, the presumed existence of hidden variables comes with the assumption of determinism. I don't think anybody would deny that hidden variables exist.

Assuming determinism, then randomness exists only at the instrumental level, the level of our sensory apprehension.
 
  • #78
kote said:
Oh, right, this thread is on superdeterminism, not determinism :).

As for superdeterminism... the only way it can be logically explained is if one of your causes is superfluous. Superdeterminism is: A and B each independently caused C.

So, what caused C? Answer: A or B, take your pick.

Logically this is impossible unless A and B are actually the same thing but are being described two separate ways. In essence, superdeterminism is just straight determinism with the addition of superfluous 2nd order explanations.

Mental activity is often ascribed such 2nd order explanatory power over physical events. But the fact that physical events (assuming we have deterministic natural laws) can be explained entirely in physical terms doesn't prove that logic is being violated and superdeterminism is right, it just proves that mental activity at some level must be reducible to physical activity (ie our brains).

Bottom line: superdeterminism violates logic. It's logically impossible in its true sense. Additionally, 2nd order explanations from properties emergent on 1st order explanations are interesting, but they don't solve any problems in basic physics, which is only concerned with 1st order causation.

Is this the standard definition of superdeterminism?
 
  • #79
ThomasT said:
Is this the standard definition of superdeterminism?

It's slightly more nuanced, but basically, yes. Superdeterminism is a second level of causation on top of one cause per effect causation. It is not necessarily true that the second level is reducible to the first level in any of its individual parts, but as a system, the second level of explanation (cause) emerges fully from the first level. Additionally, in superdeterminism, the first order explanation is always necessary and sufficient on its own.

For example, you may not find the sensation of pain anywhere in the micro level atomic world, but the sensation of pain may still emerge as a unique property from the correct systemic arrangement of atoms in my brain along with corresponding atomic impulses in my nerves etc.

This second order of properties that emerges from the first order properties of the atoms can be used in causal statements. "The pain caused me to pull my hand back from the fire." This, however, is a "super-" cause, emerging from the first order cause (or explanation) involving atoms bumping into each other according to the laws of nature at the micro level.

Edit: Now that I think about it, I may have put a little too much interpretation in my explanation. Superdeterminism just means events can have more than one immediate and sufficient cause. However, I think it's pretty trivial to conclude that this is logically impossible unless all causes are in essence the same, in which case you get what I just described.
 
  • #80
kote said:
It's slightly more nuanced, but basically, yes. Superdeterminism is a second level of causation on top of one cause per effect causation. It is not necessarily true that the second level is reducible to the first level in any of its individual parts, but as a system, the second level of explanation (cause) emerges fully from the first level. Additionally, in superdeterminism, the first order explanation is always necessary and sufficient on its own.

For example, you may not find the sensation of pain anywhere in the micro level atomic world, but the sensation of pain may still emerge as a unique property from the correct systemic arrangement of atoms in my brain along with corresponding atomic impulses in my nerves etc.

This second order of properties that emerges from the first order properties of the atoms can be used in causal statements. "The pain caused me to pull my hand back from the fire." This, however, is a "super-" cause, emerging from the first order cause (or explanation) involving atoms bumping into each other according to the laws of nature at the micro level.

Edit: Now that I think about it, I may have put a little too much interpretation in my explanation. Superdeterminism just means events can have more than one immediate and sufficient cause. However, I think it's pretty trivial to conclude that this is logically impossible unless all causes are in essence the same, in which case you get what I just described.

So superdeterminism just refers to the principles and laws governing higher order physical regimes?
 
  • #81
ThomasT said:
So superdeterminism just refers to the principles and laws governing higher order physical regimes?

It doesn't necessarily have to do with the physical realm at all. Perhaps everything is in our minds and mental causes and effects are basic to reality. We can even talk about causation in a made up universe with its own rules. Superdeterminism is about logical laws of causation. It's a mode of explanation positing overcausation.

Superdeterminism: One effect can have multiple immediate and sufficient causes.

...but read above for some of how this can and can't work :)
 
  • #82
kote, I think you've lost me, so I'm going to go back to your post #73 (on determinism) and work my way to your last post on superdeterminism, nitpicking as I go.

kote said:
Classical objective causation, the Newtonian billiard-ball style particle collision causation, has been disproven by quantum mechanical experiments.
Not disproven, but supplanted by qm wrt certain applications. Classical physics is used in a wide variety of applications. Sometimes, in semi-classical accounts, part of a system is treated classically and the other part quantum mechanically.

kote said:
Historically, this conception of classical physical causation is what was meant by determinism in nature. If that's what we mean now (but I don't think it is) then the conversation is over.
I think that's pretty much what's meant -- along the lines of an underlying deterministic wave mechanics. But the conversation isn't over, because this view is reinforced by qm and experiments, not disproven.

kote said:
Quantum mechanics also suggests that there are forces in the universe that are inherently unknowable. We can only directly measure the location of a particle on our detector, we can't directly measure the quantum forces that caused it to end up at that specific location instead of another. If this aspect of QM is true, then deterministic natural laws, if they exist, are inherently unknowable themselves (and you can further argue whether or not an unfalsifiable physical construct can in principle be considered to exist at all).
Yes, quantum theory, at least wrt the standard interpretation, does place limits on what we can know or unambiguously, objectively say about our universe. The assumption of determinism is just that, an assumption. It's based on what we do know about our universe. It places limits on the observations that might result from certain antecedent conditions. It's falsifiable in the sense that if something fantastically different from what we expect via induction were to be observed, then our current notion of causal determinism would have to be trashed.

kote said:
Of course, the problem of induction already disallows for proof of physical determinism.
Yes, it can't be proven. Just reinforced by observations. Our windows on the underlying reality of our universe are small and very foggy. However, what is known suggests that the deep reality is deterministic and locally causal.

Induction is justified by its continued practical utility. A general understanding for why induction works at all begins with assumption of determinism.

kote said:
On a practical level it is necessary to assume determinism in nature, and on a macro level things seem to be pretty deterministic. However, on the quantum level where this causation is supposed to be taking place, we have absolutely no idea how anything actually works. We cannot visualize any sort of causation on the quantum level, and we cannot come close to predicting any results.
It's almost that bleak, but maybe not quite. There's some idea of how some things work. There are indications that the deep reality of our universe is essentially wave mechanical. But try efficiently modelling some process or other exclusively in those terms.

kote said:
An assumption of determinism stems from a macroscopic point of view that presupposes a quantum-scale mechanism for physical causation.
Ok.

kote said:
However, when you actually look at the quantum level, no evidence for any such mechanism can be found, and it has been argued that no such mechanism could exist at all.
As you indicated earlier, we can't actually look at the quantum level, but the assumption of determinism is kept because there is evidence that there are quantum-scale mechanisms for physical causation.
 
  • #83
ThomasT said:
Induction is justified by its continued practical utility.

That's 'justifying' induction via induction. Using induction may be practical, even unavoidable, but that sort of 'justification' reduces to a common-sense circular argument, nothing more. And common sense is notoriously unreliable.

Induction is only useful when it works, and it's actually a classic example of an 'unjustified' assumption. Nothing wrong with that really, but it places limits on its utility.
 
  • #84
kote, before I begin nitpicking again, let me ask -- are you saying that superdeterminism refers to the fact that there are different levels of explanation, ie., either in terms of underlying dynamics or in terms of phenomena which emerge from those underlying dynamics?

That's what you seem to be saying by "superdeterminism: one effect can have multiple immediate and sufficient causes", and in your elaboration on that.

Or, are you saying that the higher order explanation might be referred to as superdeterministic?

If either, I don't think that that's what ueit meant to convey in using the term.

kote said:
In essence, superdeterminism is just straight determinism with the addition of superfluous 2nd order explanations.
A higher order explanation isn't necessarily superfluous, though it might, in a certain sense, be considered as such if there exists a viable lower order explanation for the same phenomenon.

In most cases, we have presumably higher order explanations in lieu of discovering lower order explanations that are presumed to exist via the assumption of an underlying deterministic dynamics.

Anyway, since there are multiple levels of explanation, your statement above would seem to reduce to superdeterminism = determinism. Which is the conclusion we came to earlier in this thread. In other words, the term superdeterminism is superfluous.
 
  • #85
JoeDawg said:
That's 'justifying' induction via induction. Using induction may be practical, even unavoidable, but that sort of 'justification' reduces to a common-sense circular argument, nothing more.
We can begin to try to understand the deep reason for why induction works by positing that the underlying reality of our universe is causally deterministic, but that's just a vague precursor to a detailed answer to the question in terms of fundamental dynamics.

The fact that we continue to behave inductively can, at the level of our behavior, be understood as being due to the fact that it usually works.

JoeDawg said:
And common sense is notoriously unreliable.
If it were that unreliable, it wouldn't be common.

JoeDawg said:
Induction is only useful when it works ...
It usually does work.

JoeDawg said:
... and it's actually a classic example of an 'unjustified' assumption. Nothing wrong with that really, but it places limits on its utility.
We can treat induction as an assumption, or as a method of reasoning, but it's more generally a basic behavioral characteristic. We behave inductively. It's part of our common behavioral heritage, our common sense.
 
  • #86
ThomasT said:
If it were that unreliable, it wouldn't be common.

Either unreliable or (not?) so often used...
 
  • #87
ThomasT said:
The fact that we continue to behave inductively can, at the level of our behavior, be understood as being due to the fact that it usually works.
worked, in the past.

Induction reasons from observed to unobserved.
You are reasoning from observed instances, in the past, where induction worked, to as yet unobserved instances in the future.
You're using induction to justify your belief in induction.

We can treat induction as an assumption, or as a method of reasoning, but it's more generally a basic behavioral characteristic. We behave inductively. It's part of our common behavioral heritage, our common sense.
Sure, we behave irrationally all the time.
Using induction involves an unjustified assumption.
That doesn't mean, once we make the assumption we can't proceed rationally.
As Hume said, induction is mere habit.

We can, of course, use it rationally, but we can't justify its usage.
 
  • #88
ThomasT said:
Using the standard referents for emitter, polarizer, and detector, in a simple optical Bell test setup involving emitter, 2 polarizers, and 2 detectors it's pretty easy to demonstrate that the polarizer settings aren't determined by the detector settings, or by the emitter, or by anything else in the design protocol except "whatever is used to change" the polarizer settings.

That may be true if you only look at the macroscopic description of the detector/emitter/etc. We do not know if the motion of particles in these objects exerts an influence on the motion of particles in other objects, because we only have a statistical description. The individual trajectories might be correlated even if the average force between macroscopic objects remains null.

It's been demonstrated that the method that's used to change the polarizer settings, and whether it's a randomized process or not, isn't important wrt joint detection rate. What is important is the settings that are associated with the detection attributes via the pairing process -- not how the settings themselves were generated.

I agree but this is irrelevant. Nothing has been demonstrated regarding the individual results obtained.

It's already well established that detector orientations don't trigger emissions

How?

-- and changing the settings while the emissions are in flight has no observable effect on the correlations.

I wouldn't expect that anyway.

If you want to say that these in-flight changes are having some (hidden) effect, then either there are some sort of nonlocal hidden variable(s) involved, or, as you suggest, there's some sort of heretofor unknown, and undetectable, local field that's determining the correlations. Your suggestion seems as contrived as the nonlocal models -- as well as somewhat incoherent wrt what's already known (ie., wrt working models).

The field is the "hidden variable", together with particles' positions, so, in this sense, is not known. However, if this field determines particles' motions it has to appear, on a statistical level as the EM, weak, color field. The question is if such a field can be formulated. Unfortunately I cannot do it myself as I don't have the required skills but I wonder if it could be achieved, or it is mathematically impossible. Now, in the absence of a mathematical formulation it is premature to say that it must be contrived, or that the hypothesis is not falsifiable.

I still don't know what the distinguishing characteristics of a superdeterministic theory are. Can you give a general definition of superdeterminism that differentiates it from determinism? If not, then your OP is just asking for (conclusive or definitive) arguments against the assumption of determinism. There aren't any. So, the assumption that the deep reality of Nature is deterministic remains the de facto standard assumption underlying all physical science. It isn't mentioned simply because it doesn't have to be. It's generally taken for granted -- not dismissed.

What I am interested is local determinism, as non-local determinism is clearly possible (Bohm's interpretation). There are many voices that say that it has been ruled out by EPR experiments. I am interested in seeing if any strong arguments can be put forward against the use of SD loophole (denying the statistical independence between emitter and detector) to reestablish local deterministic theories as a reasonable research object.
 
  • #89
ueit said:
What I am interested is local determinism, as non-local determinism is clearly possible (Bohm's interpretation). There are many voices that say that it has been ruled out by EPR experiments. I am interested in seeing if any strong arguments can be put forward against the use of SD loophole (denying the statistical independence between emitter and detector) to reestablish local deterministic theories as a reasonable research object.

ueit,

Statistics are calculated mathematically from individual measurements. They are aggregate observations about likelihoods. Determinism deals with absolute rational causation. It would be an inductive fallacy to say that statistics can tell us anything about the basic mechanisms of natural causation.

Linguistically, we may use statistics as 2nd order explanations in statements of cause and effect, but it is understood that statistical explanations never represent true causation. If determinism exists, there must necessarily be some independent, sufficient, underlying cause - some mechanism of causation.

Because of the problem of induction, no irreducible superdeterministic explanation can prove anything about the first order causes and effects that basic physics is concerned with.

The very concept of locality necessarily implies classical, billiard-ball style, momentum transfer causation. The experiments of quantum mechanics have conclusively falsified this model.
 
  • #90
ThomasT said:
Not disproven, but supplanted by qm wrt certain applications. Classical physics is used in a wide variety of applications. Sometimes, in semi-classical accounts, part of a system is treated classically and the other part quantum mechanically.

ThomasT, I can treat my sister as if she's my aunt. That doesn't make it true :). Local causation stemming from real classical particles and waves has been falsified by experiments. EPRB type experiments are particularly illustrative of this fact.

Yes, it can't be proven. Just reinforced by observations. Our windows on the underlying reality of our universe are small and very foggy. However, what is known suggests that the deep reality is deterministic and locally causal.

If there is evidence of deep reality being deterministic, I would like to know what it is :). As for the universe being locally deterministic, this has been proven impossible. See above.

Induction is justified by its continued practical utility. A general understanding for why induction works at all begins with assumption of determinism.

So we're supporting determinism by assuming determinism?

As you indicated earlier, we can't actually look at the quantum level, but the assumption of determinism is kept because there is evidence that there are quantum-scale mechanisms for physical causation.

If the evidence is inductive, then since you claim induction relies on an assumption of determinism itself, there is no evidence at all. I'm not denying the idea that there could be evidence for basic determinism, but the only evidence I've seen proposed here so far has been ethical. It has been assumptions about what we should believe and what's practical, rather than what we can know or what's true.
 
