Arguments Against Superdeterminism

  • Thread starter: ueit
  • Tags: Superdeterminism
AI Thread Summary
Superdeterminism (SD) challenges the statistical independence assumed in quantum mechanics, particularly in the context of Bell's Theorem, suggesting that all events, including human decisions, are predetermined. This theory is often dismissed in scientific discussions, with calls for clearer arguments against it. Critics argue that SD implies a lack of free will, raising questions about the origins of human creativity and technological advancements, such as cell phones and colliders. The conversation also touches on the philosophical implications of determinism, questioning the nature of existence and the illusion of self. Ultimately, the discussion highlights the need for a comprehensive theory that reconciles quantum and classical behaviors while addressing the implications of determinism.
  • #51
Doc Al said:
I stated up front that it's "possible" (meaning: not immediately self-contradictory), as I think Bell did as well. So what? It's also "possible" that you (and all of PF) are just a figment of my imagination.

You have not provided or described any mechanism. What experiment would you propose to falsify your proposed "mechanism"? To get anywhere, you need a specific physical mechanism.

In order to falsify the mechanism one should propose a clear mathematical structure. I am not able to propose it. But if you want a "cheap" example of a SD theory just take Bohm's interpretation, as it is, and replace in the equation the instantaneous, "present" particle distribution with a past distribution so that locality is observed. The "present" distribution is then "predicted" by the particle from the past one.

It would be more interesting to find a formulation that is not so obviously ad hoc, but for the time being I was only interested in whether there are well-formulated arguments against SD.
 
  • #52
SW VandeCarr said:
If arguments against SD are wanted, it seems one of the most obvious is that it negates the concepts of entropy and information. SD would imply the entropy of any system anywhere at any time is zero, since there is never any objective uncertainty as to a system's state or evolution. Then we are dealing with subjective uncertainty only. However, subjective uncertainty would also be predetermined, as our "subjective" state is also a function of a system with zero entropy. (Note: I'm using 'subjective' and 'objective' as if there were a fundamental difference. However, I agree with others here that our notion of 'objectivity' is related to issues of consistency and clarity of descriptions).

EDIT: What happens to "the arrow of time" if thermodynamic entropy is always zero?

Are you saying that a classical gas, composed of molecules with strict deterministic behavior would not obey the laws of thermodynamics?
 
  • #53
ueit said:
Are you saying that a classical gas, composed of molecules with strict deterministic behavior would not obey the laws of thermodynamics?

I'm making a reductio ad absurdum argument. If SD is true, then all events occur with probability one. If you plug p=1 into the Shannon equation (which differs from the Boltzmann entropy equation only by the choice of the constant) you get zero entropy. A block universe under SD (as I understand SD) is completely defined at all space-time points. Therefore its entropy would be zero everywhere all the time. The universe exists in just one possible state.
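
To make the arithmetic explicit (a short worked form of this claim, using the Gibbs/Shannon sum quoted later in the thread, with the usual convention 0 \ln 0 = 0): if a single outcome has p = 1 and every other outcome has p = 0, then

S = -k \sum_{i} p_{i} \ln p_{i} = -k \left( 1 \cdot \ln 1 \right) = 0.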

http://en.wikipedia.org/wiki/Boltzmann_entropy

EDIT:http://en.wikipedia.org/wiki/Entropy_(statistical_thermodynamics)

I'm posting these links, not because I don't think most people on this thread know what I'm talking about, but because some might not, and I just want to be specific regarding what I'm talking about.
 
Last edited:
  • #54
SW VandeCarr said:
I'm making a reductio ad absurdum argument. If SD is true, then all events occur with probability one. If you plug p=1 into the Shannon equation (which differs from the Boltzmann entropy equation only by the choice of the constant) you get zero entropy. A block universe under SD (as I understand SD) is completely defined at all space-time points. Therefore its entropy would be zero everywhere all the time. The universe exists in just one possible state.

http://en.wikipedia.org/wiki/Boltzmann_entropy

This is the fine grained entropy, not the coarse grained entropy used in thermodynamics. The fine grained entropy is always zero, even in "ordinary physics".
 
  • #55
Count Iblis said:
This is the fine grained entropy, not the coarse grained entropy used in thermodynamics. The fine grained entropy is always zero, even in "ordinary physics".

Just a short response for now. What kind of entropy does the equation I cited (S_{B}) describe? If it's the "fine grained" entropy and it's always zero, what good is it?
 
Last edited:
  • #56
ueit said:
By "detector" I mean the device or group of devices that measure the spin. It might be polarizer + photon detector, or a Stern-Gerlach device or something else. What is important is that the "detector" also includes whatever is used to change its orientation (be it an electric engine, a human, a monkey pressing a button, etc.). Everything that has a contribution to the decision regarding the measurement axis is included in the generic name of "detector".
Using the standard referents for emitter, polarizer, and detector, in a simple optical Bell test setup involving emitter, 2 polarizers, and 2 detectors it's pretty easy to demonstrate that the polarizer settings aren't determined by the detector settings, or by the emitter, or by anything else in the design protocol except "whatever is used to change" the polarizer settings.

It's been demonstrated that the method that's used to change the polarizer settings, and whether it's a randomized process or not, isn't important wrt joint detection rate. What is important is the settings that are associated with the detection attributes via the pairing process -- not how the settings themselves were generated.

ueit said:
As I have said, "a particle is only emitted when the detectors' field has a certain, "favorable", value, corresponding to a certain detector configuration." Because the evolution of the detector is deterministic, its future orientation is "fixed". The "change" while the particle is in flight is nothing but the detector's deterministic evolution which is "known" by the particle since emission. In other words, you cannot "fool" the particle. The particle "knows" what will happen because it knows the value of the field in the past + deterministic evolution law.
It's already well established that detector orientations don't trigger emissions -- and changing the settings while the emissions are in flight has no observable effect on the correlations. If you want to say that these in-flight changes are having some (hidden) effect, then either there are some sort of nonlocal hidden variable(s) involved, or, as you suggest, there's some sort of heretofore unknown, and undetectable, local field that's determining the correlations. Your suggestion seems as contrived as the nonlocal models -- as well as somewhat incoherent wrt what's already known (ie., wrt working models). Anyway, flesh out the details of it, submit it to the appropriate forum, and maybe somebody who knows more than I do will agree with your approach.

I still don't know what the distinguishing characteristics of a superdeterministic theory are. Can you give a general definition of superdeterminism that differentiates it from determinism? If not, then your OP is just asking for (conclusive or definitive) arguments against the assumption of determinism. There aren't any. So, the assumption that the deep reality of Nature is deterministic remains the de facto standard assumption underlying all physical science. It isn't mentioned simply because it doesn't have to be. It's generally taken for granted -- not dismissed.
 
Last edited:
  • #57
ThomasT said:
I still don't know what the distinguishing characteristics of a superdeterministic theory are. Can you give a general definition of superdeterminism that differentiates it from determinism? If not, then your OP is just asking for (conclusive or definitive) arguments against the assumption of determinism. There aren't any. So, the assumption that the deep reality of Nature is deterministic remains the de facto standard assumption underlying all physical science. It isn't mentioned simply because it doesn't have to be. It's generally taken for granted -- not dismissed.

I don't know how SD is defined either. Perhaps you can address my concerns stated in post 53. I'm basing my argument on Boltzmann entropy.
 
Last edited:
  • #58
SW VandeCarr said:
I don't know how SD is defined either. Perhaps you can address my concerns stated in post 53. I'm basing my argument on Boltzmann entropy.
Since we're not sure exactly what ueit means by superdeterminism, let's assume for the moment that it's just an emphatic form of the term determinism (equivalent to saying that Nature is absolutely deterministic or really really deterministic, which is equivalent to the standard meaning of determinism).

SW VandeCarr said:
If arguments against SD are wanted, it seems one of the most obvious is that it negates the concepts of entropy and information.
Determinism is an assumption about the underlying nature of the reality that we observe. It's a given as far as physical science's search for fundamental, as well as emergent, dynamical laws.

It isn't obvious to me, and I don't understand, how the assumption of determinism "negates the concepts of entropy and information".

As I've mentioned, afaik, there are no (and I don't think there can be any) definitive or conclusive arguments against determinism. Entropy might be irrelevant for this, and our information seems to support the assumption of determinism.

Maybe I just don't understand your argument. So, if you could lay it out, step by step, that might help.
 
  • #59
SW VandeCarr said:
I don't know how SD is defined either. Perhaps you can address my concerns stated in post 53. I'm basing my argument on Boltzmann entropy.

As ThomasT says, there is no difference between superdeterminism and determinism. However, classical deterministic theories can work with free, external parameters because either there is no long-range field (billiard balls) or the field vanishes with distance (classical EM and gravitational fields). The distinctive characteristic of SD (as I propose it) is that such an observer-observed separation is not possible, because the interaction between distant particles never becomes negligible. It is a quantitative, not qualitative, difference. The closest analogue would be a system (galaxy) of black holes in general relativity. In order to model the trajectory of one BH you need to know the distribution of all the other BHs.

Regarding your argument, AFAIK a reversible system has a constant entropy. If our universe is SD then it has a constant entropy. However, we cannot measure the entropy of the universe, only of a part of it. But this part is not reversible, because the interactions with the environment are not taken into account; therefore the entropy may increase.
 
  • #60
ThomasT said:
Maybe I just don't understand your argument. So, if you could lay it out, step by step, that might help.

The basic argument is fairly straightforward. Entropy, both in the Shannon IT context and in the Gibbs thermodynamic context, is a logarithmic function of probabilities. The Gibbs entropy equation is:

S = -k \sum_{i} p_{i} \ln p_{i}

The probabilities are based on the number of states in which a system can exist. For example a sequence of ten fair coin tosses has 1024 possible states, each state having an equal probability of 1/1024. In thermodynamics, we're talking about the macrostate of a system as a composition of N microstates. The model of the evolution of a thermodynamic system is a Markov process involving the transition probabilities from one state to another.

Given this background, if you assume SD, I'm arguing that there is just one state in which a system can exist at any point in time. The whole notion of a probability becomes irrelevant as an objective concept. Entropy is a function of the number of possible states in which a system can exist. If there is only one state, entropy is equal to zero. Moreover, since the evolution of a system is strictly determined, there is no Markov process.

In the coin toss example, the ten-toss sequence is predetermined. There is only one possible outcome before the coin is actually tossed. At best we can only talk about subjective entropy: that is, our uncertainty as to the outcome. This is what SD says. Now, if you take this one step further and ask what subjective uncertainty actually is, it's a state of a system, the system being our brain and the state of knowledge represented in the brain. That's my reductio ad absurdum. We only imagine there are multiple possible outcomes. In reality (under SD) there is only one possible outcome and probabilities have no objective meaning. How can we talk about the entropy of the universe increasing when there is no objective entropy?
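
A minimal numerical sketch of this argument as stated (assuming equiprobable sequences for the fair-coin case and a single certain sequence for the predetermined case; k is set to 1, so the result is in nats; the function name is just for illustration):

```python
import math

def gibbs_entropy(probs, k=1.0):
    """S = -k * sum(p * ln p), with the convention 0 * ln 0 = 0."""
    return -k * sum(p * math.log(p) for p in probs if p > 0)

# Ten fair coin tosses: 2**10 = 1024 equally likely sequences.
n_states = 2 ** 10
fair = [1 / n_states] * n_states
print(gibbs_entropy(fair))           # 10 * ln 2 ≈ 6.93 nats

# The predetermined case described above: one sequence with p = 1.
predetermined = [1.0] + [0.0] * (n_states - 1)
print(gibbs_entropy(predetermined))  # 0.0
```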
 
Last edited:
  • #61
ueit said:
As ThomasT says, there is no difference between superdeterminism and determinism.

Regarding your argument, AFAIK a reversible system has a constant entropy. If our universe is SD then it has a constant entropy. However, we cannot measure the entropy of the universe, only of a part of it. But this part is not reversible, because the interactions with the environment are not taken into account; therefore the entropy may increase.

I saw your post after I posted my last post quoting ThomasT. If you concede that the entropy of the universe is constant under SD, what does that say about the Second Law? As far as local environments are concerned, I'm not sure your argument rescues the Second Law. Local entropy may increase, decrease, or remain constant against the background of a constant-entropy universe. In any case, entropy is imaginary if there are no objective probabilities. IMHO, you can have SD or the Second Law, but not both.
 
Last edited:
  • #62
SW VandeCarr said:
Just a short response for now. What kind of entropy does the equation I cited (S_{B}) describe? If it's the "fine grained" entropy and it's always zero, what good is it?


That formula is completely general and can describe any kind of entropy. To get the entropy we use in practice, you always have to use a coarse graining procedure to define the probabilities.

If you have a given number of molecules in a given volume and you were to specify the energy of the system exactly, then there would be only one quantum state the system can be in. So what you do is explicitly specify some small energy uncertainty delta E and then count the number of microstates that lie within that small energy interval. Then the fundamental assumption that all these microstates are equally likely yields the Boltzmann formula:

S = k \ln \Omega

where \Omega is the number of microstates.


Check out http://en.wikipedia.org/wiki/Fundamental_thermodynamic_relation#Derivation_from_first_principles for the derivation of

dS = \frac{dQ}{T}

from S = k \ln \Omega

to see the importance of specifying the delta E.

So, in the end we arrive at:

dS = \frac{dE}{T} + \frac{X \, dx}{T}

where X is the generalized force and x an external parameter. And for a reversible change we have dE + X \, dx = dQ.
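
As a toy illustration of the coarse-graining step described above (my own sketch, not Count Iblis's example): count the microstates \Omega of N independent two-level units whose total energy falls inside a small window delta E, then take S = k \ln \Omega. The numbers and the function name are hypothetical.

```python
import math

def coarse_grained_entropy(n_units, e_target, delta_e, k=1.0):
    """S = k * ln(Omega), where Omega counts microstates of n_units
    two-level units (unit level spacing) whose total energy lies in
    the window [e_target, e_target + delta_e). Use k = 1.380649e-23
    J/K to get S in SI units."""
    omega = 0
    for n_excited in range(n_units + 1):
        if e_target <= n_excited < e_target + delta_e:
            # ways to choose which of the n_units are excited
            omega += math.comb(n_units, n_excited)
    return k * math.log(omega) if omega > 0 else float("nan")

# 100 two-level units, energy window covering 40, 41, and 42 excitations
print(coarse_grained_entropy(100, 40, 3))
```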
 
Last edited by a moderator:
  • #63
ueit said:
In order to falsify the mechanism one should propose a clear mathematical structure. I am not able to propose it. But if you want a "cheap" example of a SD theory just take Bohm's interpretation, as it is, and replace in the equation the instantaneous, "present" particle distribution with a past distribution so that locality is observed. The "present" distribution is then "predicted" by the particle from the past one.

It would be more interesting to find a formulation that is not so obviously ad hoc, but for the time being I was only interested in whether there are well-formulated arguments against SD.
Let's push your model a bit deeper and see if it survives. We know from QCD that more than 99% of all the mass of matter is concentrated in the nuclei, and the mass of the quarks only adds up to a few percent. The rest of the mass (>96%) comes from virtual gluons that randomly pop into existence and disappear again from the quantum vacuum. The Higgs field is theorized to make up the other few percent and give mass to quarks and electrons through virtual Higgs bosons, and it is thought to derive its energy from the quantum vacuum too. So it appears we are very close to having evidence that all of physical matter in space-time emerges from the timeless and spaceless Planck scale through random quantum fluctuations. This may resolve the biggest of all questions, "Why is there something instead of nothing?", through adjustments to both how we view "something" and how we view "nothing". But one could wonder: are quantum fluctuations really random if they have the ability to create such immense volumes of information at our macro scale (the whole structure of reality as we see it)? Only infinity, and the notion that, given infinity, everything that can occur will occur in the quantum vacuum, can provide a somewhat coherent explanation. What's your opinion?
 
Last edited:
  • #64
Count Iblis said:
That formula is completely general and can describe any kind of entropy. To get the entropy we use in practice, you always have to use a coarse graining procedure to define the probabilities.

Thanks for clarifying that. Given that the Gibbs formula is good for both fine- and coarse-grained entropy, it would seem that SD would restrict the Second Law to specific experiments at most, and that the Second Law would not be a universal principle. Therefore, with SD, we cannot explain the arrow of time in terms of the Second Law, nor can we justify entropy as an objective concept. (see my previous posts.) SD may even mean that we have to give up the idea of randomness at all scales, even random quantum fluctuations at Planck scales.
 
Last edited:
  • #65
SW VandeCarr, it seems to me that probabilistic descriptions and the concepts of entropy and information are independent from the assumption of determinism.

It's assumed that the underlying evolution of any physical system is deterministic. This assumption is objective in the sense that, and insofar as, it's inferrable from the development of dynamical laws and principles that correspond with the objective data.

However, the assumption of determinism doesn't inform us about the details of the underlying (real) state or evolution of any physical system. Even though it's assumed that any physical system can be in only one (real) state at any given time, those (real) states are generally unknown, and the behavior of observable objects is increasingly unpredictable as the dependence of the behavior on unknown factors increases. So, probabilistic descriptions are often necessary, and their use doesn't contradict the underlying assumption of determinism.

You wrote:
We only imagine there are multiple possible outcomes. In reality (under SD) there is only one possible outcome and probabilities have no objective meaning.
If the underlying states and dynamics were known, then probabilistic descriptions would be obviated. But they aren't.

However, this doesn't mean that probabilistic descriptions aren't objective. It isn't our imaginations that tell us that the tossed coin is going to come up either heads or tails.

How can we talk about the entropy of the universe increasing when there is no objective entropy?
I don't know quite how to think about the entropy of the universe. Is there a definitive statement about the entropy of the universe? I've seen several different values given, none of which are 0.

In any case, entropy is connected with the dissipation of energy and the arrow of time -- the archetypal example of which is the radiative arrow. Drop a pebble into a smooth pool of water. The evolution of the wavefront is deterministic, isn't it?

I don't think your argument is why ueit's proposal should be rejected. There are other reasons, not the least of which is the notion of fields whose strength doesn't diminish with distance.
 
  • #66
SW VandeCarr said:
Therefore, with SD, we cannot explain the arrow of time in terms of the Second Law, nor can we justify entropy as an objective concept. (see my previous posts.) SD may even mean that we have to give up the idea of randomness at all scales, even random quantum fluctuations at Planck scales.
The Second Law doesn't explain the arrow of time. It's just a generalization of it. Since observations are so far in agreement with it, it's kept.

Entropy, in its many forms, is very much an objective concept insofar as it depends on measurements.

The assumption of determinism isn't at odds with randomness. Randomness refers to unpredictability. We use words like random and spontaneous when we can't specify causal antecedents. This doesn't mean that there aren't any.
 
  • #67
ThomasT said:
However, this doesn't mean that probabilistic descriptions aren't objective. It isn't our imaginations that tell us that the tossed coin is going to come up either heads or tails.

I don't know quite how to think about the entropy of the universe. Is there a definitive statement about the entropy of the universe? I've seen several different values given, none of which are 0.

The entropy of the universe, whatever it might be, is definitely not 0. ueit has already agreed that SD implies a constant entropy for the universe and if the universe is in just one possible spacetime state (block universe), all events occur with a real probability of one, which yields zero when plugged into the Gibbs equation for entropy. This is an argument against SD.

Regarding probabilities, if the uncertainty is due only to a lack of complete information, the probabilities are not objective. They would be objective only if we assume that nature is fundamentally probabilistic and true randomness actually exists.
 
Last edited:
  • #68
SW VandeCarr said:
ueit has already agreed that SD implies a constant entropy for the universe ...
He's also agreed that SD is synonymous with standard determinism. There's a better name for what he's proposing, which I'll suggest to him. Anyway, determinism doesn't imply a constant entropy.

SW VandeCarr said:
... and if the universe is in just one possible spacetime state (block universe), all events occur with a real probability of one, which yields zero when plugged into the Gibbs equation for entropy. This is an argument against SD.
I don't think the block universe model should be taken literally, as a realistic representation. The universe is assumed to be in one possible, transitory, spatial configuration at any given time wrt evolutionary (deterministic), presentist models.

Saying that all events occur with a real probability of one is meaningless. Probabilities are applicable before, not after, the facts of observation.

SW VandeCarr said:
Regarding probabilities, if the uncertainty is due only to a lack of complete information, the probabilities are not objective.
If we had complete information we wouldn't need probabilities. What is non objective about the observation that a tossed coin will come up either heads or tails?

SW VandeCarr said:
They would be objective only if we assumed that nature was fundamentally probabilistic and true randomness actually existed.
No. Probabilities are objective when they're based on observable possibilities. Randomness refers to our observations, not the deep reality of Nature. True randomness does exist. There are lots of things that we really can't predict. :smile:

Why would we assume that Nature is fundamentally probabilistic when there are so many reasons to believe that it isn't?
 
  • #69
ThomasT said:
Why would we assume that Nature is fundamentally probabilistic when there are so many reasons to believe that it isn't?

I'm not assuming anything. I don't know. I'm saying if...then. Given determinism, the future is as well determined as the past. We just don't know for certain what it will be. Therefore, it's our uncertainty that probabilities quantify.
 
  • #70
SW VandeCarr said:
I'm not assuming anything. I don't know. I'm saying if...then. Given determinism, the future is as well determined as the past. We just don't know for certain what it will be. Therefore, it's our uncertainty that probabilities quantify.
Yes, it's our uncertainties that probabilities quantify. We assume an underlying determinism (for many good reasons). But we don't know the details of that underlying determinism. Hence, the need for probabilistic descriptions.

ueit asks for arguments against determinism. Afaik, there aren't any -- at least no definitive ones. And, as far as I can tell from this thread you haven't given any.

But, ueit's proposed explanation for Bell-type correlations is problematic for reasons that I've given.

So, where are we?
 
  • #71
ThomasT said:
Yes, it's our uncertainties that probabilities quantify. We assume an underlying determinism (for many good reasons). But we don't know the details of that underlying determinism. Hence, the need for probabilistic descriptions.

ueit asks for arguments against determinism. Afaik, there aren't any -- at least no definitive ones. And, as far as I can tell from this thread you haven't given any.

But, ueit's proposed explanation for Bell-type correlations is problematic for reasons that I've given.

So, where are we?

I think you're trying to have it both ways. First, just to be clear, we need to quantify uncertainty. Uncertainty can be written as a function of a probability: U = 4p(1-p), where the factor 4 just scales the measure to the interval [0,1]. It's clear that uncertainty is maximal when p = 0.5 and zero when p = 0 or p = 1.
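
As a quick check of the quoted measure (standard calculus, not part of the original post):

U(p) = 4p(1-p), \qquad \frac{dU}{dp} = 4(1-2p) = 0 \;\Rightarrow\; p = \tfrac{1}{2}, \qquad U(\tfrac{1}{2}) = 1, \qquad U(0) = U(1) = 0.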

Now you've already agreed that probability measures our uncertainty. What does our uncertainty have to do with nature? Someone tosses a fair coin behind a curtain. The coin is tossed but you don't see it. For the "tosser" uncertainty is 0. For you, it's 1.

Now we have deterministic theories that are not time dependent. The laws of physics are presumed to hold in the future just as in the past. The charge of an electron does not change with time. If we have determinism (D), it's clear that any outcome of a time dependent process is also entirely predictable in principle. That means randomness is only a reflection of our current state of knowledge. If you could have perfect information as to some future time dependent outcome, you have U=0. This corresponds to p=1 or p=0. This is what I meant when I said that under D, with perfect information, future events occur with a real probability 1 (or 0). In effect under D, we don't have probabilities in nature. We only have our uncertainty about nature.
 
  • #72
SW VandeCarr said:
I think you're trying to have it both ways.
Trying to have what both ways?

SW VandeCarr said:
First, just to be clear, we need to quantify uncertainty. Uncertainty is a function of a probability: U=4(p(1-p)) where 4 just scales the measure to the interval [0,1]. It's clear uncertainty is maximal when p=0.5 and 0 when p=0 or p=1.
The probability already quantifies the uncertainty, doesn't it?

SW VandeCarr said:
Now you've already agreed that probability measures our uncertainty.
I agreed that probabilities are quantitative expressions of our uncertainties.

SW VandeCarr said:
What does our uncertainty have to do with nature?
That's what I was wondering when you said ...
SW VandeCarr said:
They (probabilities) would be objective only if we assume that nature is fundamentally probabilistic and true randomness actually exists.
... and I pointed out that probabilities are objective when they're based on observable possibilities, and that randomness refers to our observations, not the underlying reality.

SW VandeCarr said:
Someone tosses a fair coin behind a curtain. The coin is tossed but you don't see it. For the "tosser" uncertainty is 0. For you, it's 1.
OK.

SW VandeCarr said:
Now we have deterministic theories that are not time dependent. The laws of physics are presumed to hold in the future just as in the past. The charge of an electron does not change with time. If we have determinism (D), it's clear that any outcome of a time dependent process is also entirely predictable in principle.
OK, until we get to quantum stuff where that pesky quantum of action becomes significant.

SW VandeCarr said:
That means randomness is only a reflection of our current state of knowledge.
I agree.

SW VandeCarr said:
If you could have perfect information as to some future time dependent outcome, you have U=0. This corresponds to p=1 or p=0. This is what I meant when I said that under D, with perfect information, future events occur with a real probability 1 (or 0).
Of course, an event either has occurred or it hasn't.

SW VandeCarr said:
In effect under D, we don't have probabilities in nature. We only have our uncertainty about nature.
Ok, so I guess we agree on this. So far, wrt ueit's OP we don't have any good argument(s) against the assumption of determinism.

What about a field associated with filtering-measuring devices in optical Bell tests, whose strength doesn't diminish with distance, and whose values determine or trigger emissions? (Let's call it the Ueit Field.) Any arguments against that?
 
  • #73
ThomasT said:
So far, wrt ueit's OP we don't have any good argument(s) against the assumption of determinism.

Classical objective causation, the Newtonian billiard-ball style particle collision causation, has been disproven by quantum mechanical experiments. Historically, this conception of classical physical causation is what was meant by determinism in nature. If that's what we mean now (but I don't think it is) then the conversation is over.

Quantum mechanics also suggests that there are forces in the universe that are inherently unknowable. We can only directly measure the location of a particle on our detector, we can't directly measure the quantum forces that caused it to end up at that specific location instead of another. If this aspect of QM is true, then deterministic natural laws, if they exist, are inherently unknowable themselves (and you can further argue whether or not an unfalsifiable physical construct can in principle be considered to exist at all).

Of course, the problem of induction already disallows for proof of physical determinism.

Spinoza and others have argued that physical determinism is rationally guaranteed by the laws of logic. Arguments from logic are the only type that have any hope of proving determinism in nature.

On a practical level it is necessary to assume determinism in nature, and on a macro level things seem to be pretty deterministic. However, on the quantum level where this causation is supposed to be taking place, we have absolutely no idea how anything actually works. We cannot visualize any sort of causation on the quantum level, and we cannot come close to predicting any results.

An assumption of determinism stems from a macroscopic point of view that presupposes a quantum-scale mechanism for physical causation. However, when you actually look at the quantum level, no evidence for any such mechanism can be found, and it has been argued that no such mechanism could exist at all.
 
  • #74
ThomasT said:
The probability already quantifies the uncertainty, doesn't it?

Not really. You approach certainty as you move toward p=0 or p=1 from p=0.5. Uncertainty is maximal at p=0.5. Look at the equation.

OK, until we get to quantum stuff where that pesky quantum of action becomes significant.

Well, D doesn't exclude anything. It would seem you're going to have to accept hidden variables if you want D, or settle for a universe that is truly random at its smallest scales. That's what I mean by trying to have it both ways.

What about a field associated with filtering-measuring devices in optical Bell tests, whose strength doesn't diminish with distance, and whose values determine or trigger emissions? (Let's call it the Ueit Field.) Any arguments against that?

I'm not a physicist and I don't feel I'm qualified to venture an opinion on that. I do feel qualified enough in statistics and probability to venture an opinion in this area (in a frequentist or Bayesian sense), both on the methods and on their limitations.
I did just offer a tentative opinion re hidden variables. You can tell me whether I'm correct or not.
 
  • #75
Oh, right, this thread is on superdeterminism, not determinism :).

As for superdeterminism... the only way it can be logically explained is if one of your causes is superfluous. Superdeterminism is: A and B each independently caused C.

So, what caused C? Answer: A or B, take your pick.

Logically this is impossible unless A and B are actually the same thing but are being described two separate ways. In essence, superdeterminism is just straight determinism with the addition of superfluous 2nd order explanations.

Mental activity is often ascribed such 2nd order explanatory power over physical events. But the fact that physical events (assuming we have deterministic natural laws) can be explained entirely in physical terms doesn't prove that logic is being violated and superdeterminism is right, it just proves that mental activity at some level must be reducible to physical activity (ie our brains).

Bottom line: superdeterminism violates logic. It's logically impossible in its true sense. Additionally, 2nd order explanations from properties emergent on 1st order explanations are interesting, but they don't solve any problems in basic physics, which is only concerned with 1st order causation.
 
Last edited:
  • #76
kote said:
Oh, right, this thread is on superdeterminism, not determinism :).

The OP has agreed (as per ThomasT) that SD and D are the same.
 
  • #77
SW VandeCarr said:
You approach certainty as you move toward p=0 or p=1 from p=0.5. Uncertainty is maximal at p=0.5.
That seems pretty quantitative to me. :smile: Your equation just quantifies it in a different way.

SW VandeCarr said:
Well, D doesn't exclude anything. It would seem you're going to have to accept hidden variables if you want D, or settle for a universe that is truly random at its smallest scales. That's what I mean by trying to have it both ways.
Yes, of course, the presumed existence of hidden variables comes with the assumption of determinism. I don't think anybody would deny that hidden variables exist.

Assuming determinism, then randomness exists only at the instrumental level, the level of our sensory apprehension.
 
  • #78
kote said:
Oh, right, this thread is on superdeterminism, not determinism :).

As for superdeterminism... the only way it can be logically explained is if one of your causes is superfluous. Superdeterminism is: A and B each independently caused C.

So, what caused C? Answer: A or B, take your pick.

Logically this is impossible unless A and B are actually the same thing but are being described two separate ways. In essence, superdeterminism is just straight determinism with the addition of superfluous 2nd order explanations.

Mental activity is often ascribed such 2nd order explanatory power over physical events. But the fact that physical events (assuming we have deterministic natural laws) can be explained entirely in physical terms doesn't prove that logic is being violated and superdeterminism is right, it just proves that mental activity at some level must be reducible to physical activity (ie our brains).

Bottom line: superdeterminism violates logic. It's logically impossible in its true sense. Additionally, 2nd order explanations from properties emergent on 1st order explanations are interesting, but they don't solve any problems in basic physics, which is only concerned with 1st order causation.

Is this the standard definition of superdeterminism?
 
  • #79
ThomasT said:
Is this the standard definition of superdeterminism?

It's slightly more nuanced, but basically, yes. Superdeterminism is a second level of causation on top of one cause per effect causation. It is not necessarily true that the second level is reducible to the first level in any of its individual parts, but as a system, the second level of explanation (cause) emerges fully from the first level. Additionally, in superdeterminism, the first order explanation is always necessary and sufficient on its own.

For example, you may not find the sensation of pain anywhere in the micro level atomic world, but the sensation of pain may still emerge as a unique property from the correct systemic arrangement of atoms in my brain along with corresponding atomic impulses in my nerves etc.

This second order of properties that emerges from the first order properties of the atoms can be used in causal statements. "The pain caused me to pull my hand back from the fire." This, however, is a "super-" cause, emerging from the first order cause (or explanation) involving atoms bumping into each other according to the laws of nature at the micro level.

Edit: Now that I think about it, I may have put a little too much interpretation in my explanation. Superdeterminism just means events can have more than one immediate and sufficient cause. However, I think it's pretty trivial to conclude that this is logically impossible unless all causes are in essence the same, in which case you get what I just described.
 
Last edited:
  • #80
kote said:
It's slightly more nuanced, but basically, yes. Superdeterminism is a second level of causation on top of one cause per effect causation. It is not necessarily true that the second level is reducible to the first level in any of its individual parts, but as a system, the second level of explanation (cause) emerges fully from the first level. Additionally, in superdeterminism, the first order explanation is always necessary and sufficient on its own.

For example, you may not find the sensation of pain anywhere in the micro level atomic world, but the sensation of pain may still emerge as a unique property from the correct systemic arrangement of atoms in my brain along with corresponding atomic impulses in my nerves etc.

This second order of properties that emerges from the first order properties of the atoms can be used in causal statements. "The pain caused me to pull my hand back from the fire." This, however, is a "super-" cause, emerging from the first order cause (or explanation) involving atoms bumping into each other according to the laws of nature at the micro level.

Edit: Now that I think about it, I may have put a little too much interpretation in my explanation. Superdeterminism just means events can have more than one immediate and sufficient cause. However, I think it's pretty trivial to conclude that this is logically impossible unless all causes are in essence the same, in which case you get what I just described.

So superdeterminism just refers to the principles and laws governing higher order physical regimes?
 
  • #81
ThomasT said:
So superdeterminism just refers to the principles and laws governing higher order physical regimes?

It doesn't necessarily have to do with the physical realm at all. Perhaps everything is in our minds and mental causes and effects are basic to reality. We can even talk about causation in a made up universe with its own rules. Superdeterminism is about logical laws of causation. It's a mode of explanation positing overcausation.

Superdeterminism: One effect can have multiple immediate and sufficient causes.

...but read above for some of how this can and can't work :)
 
  • #82
kote, I think you've lost me, so I'm going to go back to your post #73 (on determinism) and work my way to your last post on superdeterminism, nitpicking as I go.

kote said:
Classical objective causation, the Newtonian billiard-ball style particle collision causation, has been disproven by quantum mechanical experiments.
Not disproven, but supplanted by qm wrt certain applications. Classical physics is used in a wide variety of applications. Sometimes, in semi-classical accounts, part of a system is treated classically and the other part quantum mechanically.

kote said:
Historically, this conception of classical physical causation is what was meant by determinism in nature. If that's what we mean now (but I don't think it is) then the conversation is over.
I think that's pretty much what's meant -- along the lines of an underlying deterministic wave mechanics. But the conversation isn't over, because this view is reinforced by qm and experiments, not disproven.

kote said:
Quantum mechanics also suggests that there are forces in the universe that are inherently unknowable. We can only directly measure the location of a particle on our detector, we can't directly measure the quantum forces that caused it to end up at that specific location instead of another. If this aspect of QM is true, then deterministic natural laws, if they exist, are inherently unknowable themselves (and you can further argue whether or not an unfalsifiable physical construct can in principle be considered to exist at all).
Yes, quantum theory, at least wrt the standard interpretation, does place limits on what we can know or unambiguously, objectively say about our universe. The assumption of determinism is just that, an assumption. It's based on what we do know about our universe. It places limits on the observations that might result from certain antecedent conditions. It's falsifiable in the sense that if something fantastically different from what we expect vis induction were to be observed, then our current notion of causal determinism would have to be trashed.

kote said:
Of course, the problem of induction already disallows for proof of physical determinism.
Yes, it can't be proven. Just reinforced by observations. Our windows on the underlying reality of our universe are small and very foggy. However, what is known suggests that the deep reality is deterministic and locally causal.

Induction is justified by its continued practical utility. A general understanding of why induction works at all begins with the assumption of determinism.

kote said:
On a practical level it is necessary to assume determinism in nature, and on a macro level things seem to be pretty deterministic. However, on the quantum level where this causation is supposed to be taking place, we have absolutely no idea how anything actually works. We cannot visualize any sort of causation on the quantum level, and we cannot come close to predicting any results.
It's almost that bleak, but maybe not quite. There's some idea of how some things work. There are indications that the deep reality of our universe is essentially wave mechanical. But try efficiently modelling some process or other exclusively in those terms.

kote said:
An assumption of determinism stems from a macroscopic point of view that presupposes a quantum-scale mechanism for physical causation.
Ok.

kote said:
However, when you actually look at the quantum level, no evidence for any such mechanism can be found, and it has been argued that no such mechanism could exist at all.
As you indicated earlier, we can't actually look at the quantum level, but the assumption of determinism is kept because there is evidence that there are quantum-scale mechanisms for physical causation.
 
  • #83
ThomasT said:
Induction is justified by its continued practical utility.

That's 'justifying' induction via induction. Using induction may be practical, even unavoidable, but that sort of 'justification' reduces to a common-sense circular argument, nothing more. And common sense is notoriously unreliable.

Induction is only useful when it works, and it's actually a classic example of an 'unjustified' assumption. Nothing wrong with that really, but it places limits on its utility.
 
  • #84
kote, before I begin nitpicking again, let me ask -- are you saying that superdeterminism refers to the fact that there are different levels of explanation, ie., either in terms of underlying dynamics or in terms of phenomena which emerge from those underlying dynamics?

That's what you seem to be saying by "superdeterminism: one effect can have multiple immediate and sufficient causes", and in your elaboration on that.

Or, are you saying that the higher order explanation might be referred to as superdeterministic?

If either, I don't think that that's what ueit meant to convey in using the term.

kote said:
In essence, superdeterminism is just straight determinism with the addition of superfluous 2nd order explanations.
A higher order explanation isn't necessarily superfluous, though it might, in a certain sense, be considered as such if there exists a viable lower order explanation for the same phenomenon.

In most cases, we have presumably higher order explanations in lieu of discovering lower order explanations that are presumed to exist vis the assumption of an underlying deterministic dynamics.

Anyway, since there are multiple levels of explanation, your statement above would seem to reduce to superdeterminism = determinism. Which is the conclusion we came to earlier in this thread. In other words, the term superdeterminism is superfluous.
 
  • #85
JoeDawg said:
That's 'justifying' induction via induction. Using induction may be practical, even unavoidable, but that sort of 'justification' reduces to a common-sense circular argument, nothing more.
We can begin to try to understand the deep reason for why induction works by positing that the underlying reality of our universe is causally deterministic, but that's just a vague precursor to a detailed answer to the question in terms of fundamental dynamics.

The fact that we continue to behave inductively can, at the level of our behavior, be understood as being due to the fact that it usually works.

JoeDawg said:
And common sense is notoriously unreliable.
If it were that unreliable, it wouldn't be common.

JoeDawg said:
Induction is only useful when it works ...
It usually does work.

JoeDawg said:
... and its actually a classic example of an 'unjustified' assumption. Nothing wrong with that really, but it places limits on its utility.
We can treat induction as an assumption, or as a method of reasoning, but it's more generally a basic behavioral characteristic. We behave inductively. It's part of our common behavioral heritage, our common sense.
 
  • #86
ThomasT said:
If it were that unreliable, it wouldn't be common.

Either unreliable or (not?) so often used...
 
  • #87
ThomasT said:
The fact that we continue to behave inductively can, at the level of our behavior, be understood as being due to the fact that it usually works.
worked, in the past.

Induction reasons from observed to unobserved.
You are reasoning from observed instances, in the past, where induction worked, to as yet unobserved instances in the future.
You're using induction to justify your belief in induction.

We can treat induction as an assumption, or as a method of reasoning, but it's more generally a basic behavioral characteristic. We behave inductively. It's part of our common behavioral heritage, our common sense.
Sure, we behave irrationally all the time.
Using induction involves an unjustified assumption.
That doesn't mean, once we make the assumption we can't proceed rationally.
As Hume said, induction is mere habit.

We can, of course, use it rationally, but we can't justify its usage.
 
  • #88
ThomasT said:
Using the standard referents for emitter, polarizer, and detector, in a simple optical Bell test setup involving emitter, 2 polarizers, and 2 detectors it's pretty easy to demonstrate that the polarizer settings aren't determined by the detector settings, or by the emitter, or by anything else in the design protocol except "whatever is used to change" the polarizer settings.

That may be true if you only look at the macroscopic description of the detector/emitter/etc. We do not know if the motion of particles in these objects exert an influence on the motion of particles in other objects because we only have a statistical description. The individual trajectories might be correlated even if the average force between macroscopic objects remains null.

It's been demonstrated that the method that's used to change the polarizer settings, and whether it's a randomized process or not, isn't important wrt joint detection rate. What is important is the settings that are associated with the detection attributes via the pairing process -- not how the settings themselves were generated.

I agree but this is irrelevant. Nothing has been demonstrated regarding the individual results obtained.

It's already well established that detector orientations don't trigger emissions

How?

-- and changing the settings while the emissions are in flight has no observable effect on the correlations.

I wouldn't expect that anyway.

If you want to say that these in-flight changes are having some (hidden) effect, then either there are some sort of nonlocal hidden variable(s) involved, or, as you suggest, there's some sort of heretofore unknown, and undetectable, local field that's determining the correlations. Your suggestion seems as contrived as the nonlocal models -- as well as somewhat incoherent wrt what's already known (ie., wrt working models).

The field is the "hidden variable", together with the particles' positions, so, in this sense, it is not known. However, if this field determines the particles' motions, it has to appear, on a statistical level, as the EM, weak, and color fields. The question is whether such a field can be formulated. Unfortunately I cannot do it myself, as I don't have the required skills, but I wonder if it could be achieved or if it is mathematically impossible. Now, in the absence of a mathematical formulation, it is premature to say that it must be contrived, or that the hypothesis is not falsifiable.

I still don't know what the distinguishing characteristics of a superdeterministic theory are. Can you give a general definition of superdeterminism that differentiates it from determinism? If not, then your OP is just asking for (conclusive or definitive) arguments against the assumption of determinism. There aren't any. So, the assumption that the deep reality of Nature is deterministic remains the de facto standard assumption underlying all physical science. It isn't mentioned simply because it doesn't have to be. It's generally taken for granted -- not dismissed.

What I am interested in is local determinism, as non-local determinism is clearly possible (Bohm's interpretation). There are many voices that say that it has been ruled out by EPR experiments. I am interested in seeing if any strong arguments can be put forward against using the SD loophole (denying the statistical independence between emitter and detector) to re-establish local deterministic theories as a reasonable research topic.
 
  • #89
ueit said:
What I am interested in is local determinism, as non-local determinism is clearly possible (Bohm's interpretation). There are many voices that say that it has been ruled out by EPR experiments. I am interested in seeing if any strong arguments can be put forward against using the SD loophole (denying the statistical independence between emitter and detector) to re-establish local deterministic theories as a reasonable research topic.

ueit,

Statistics are calculated mathematically from individual measurements. They are aggregate observations about likelihoods. Determinism deals with absolute rational causation. It would be an inductive fallacy to say that statistics can tell us anything about the basic mechanisms of natural causation.

Linguistically, we may use statistics as 2nd order explanations in statements of cause and effect, but it is understood that statistical explanations never represent true causation. If determinism exists, there must necessarily be some independent, sufficient, underlying cause - some mechanism of causation.

Because of the problem of induction, no irreducible superdeterministic explanation can prove anything about the first order causes and effects that basic physics is concerned with.

The very concept of locality necessarily implies classical, billiard-ball style, momentum transfer causation. The experiments of quantum mechanics have conclusively falsified this model.
 
Last edited:
  • #90
ThomasT said:
Not disproven, but supplanted by qm wrt certain applications. Classical physics is used in a wide variety of applications. Sometimes, in semi-classical accounts, part of a system is treated classically and the other part quantum mechanically.

ThomasT, I can treat my sister as if she's my aunt. That doesn't make it true :). Local causation stemming from real classical particles and waves has been falsified by experiments. EPRB type experiments are particularly illustrative of this fact.

Yes, it can't be proven. Just reinforced by observations. Our windows on the underlying reality of our universe are small and very foggy. However, what is known suggests that the deep reality is deterministic and locally causal.

If there is evidence of deep reality being deterministic, I would like to know what it is :). As for the universe being locally deterministic, this has been proven impossible. See above.

Induction is justified by its continued practical utility. A general understanding of why induction works at all begins with the assumption of determinism.

So we're supporting determinism by assuming determinism?

As you indicated earlier, we can't actually look at the quantum level, but the assumption of determinism is kept because there is evidence that there are quantum-scale mechanisms for physical causation.

If the evidence is inductive, then since you claim induction relies on an assumption of determinism itself, there is no evidence at all. I'm not denying the idea that there could be evidence for basic determinism, but the only evidence I've seen proposed here so far has been ethical. It has been assumptions about what we should believe and what's practical, rather than what we can know or what's true.
 
Last edited:
  • #91
ThomasT said:
A higher order explanation isn't necessarily superfluous, though it might, in a certain sense, be considered as such if there exists a viable lower order explanation for the same phenomenon.

ThomasT,

If there is no viable lower order explanation then by definition you aren't dealing with a higher order explanation. Higher order explanations, as such, are not necessary, and unless they are reducible to first order explanations, they cannot be sufficient either.

Basically, they aren't true causes (or explanations) at all.
 
  • #92
ueit said:
What I am interested is local determinism, as non-local determinism is clearly possible (Bohm's interpretation). There are many voices that say that it has been ruled out by EPR experiments.
Lhv formalisms of quantum entangled states are ruled out -- not the possible existence of lhv's. As things stand now, there's no conclusive argument for either locality or nonlocality in Nature. But the available physical evidence suggests that Nature behaves deterministically according to the principle of local causation.

ueit said:
I am interested in seeing if any strong arguments can be put forward against the use of SD loophole (denying the statistical independence between emitter and detector) to reestablish local deterministic theories as a reasonable research object.
You've already agreed that the method of changing the polarizer settings, as well as whether or not they're changed while the emissions are in flight, incident on the polarizers, is irrelevant to the rate of joint detection.

The reason that Bell inequalities are violated has to do with the formal requirements due to the assumption of locality. This formal requirement also entails statistical independence of the accumulated data sets at A and B. But entanglement experiments are designed and executed to produce statistical dependence vis the pairing process.

There's no way around this unless you devise a model that can actually predict individual detections.

Or you could reason your way around the difficulty by noticing that the hidden variable (ie., the specific quality of the emission that might cause enough of it to be transmitted by the polarizer to register a detection) is irrelevant wrt the rate of joint detection (the only thing that matters wrt joint detection is the relationship, presumably produced via simultaneous emission, between the two opposite-moving disturbances). Thus you preserve the idea that the correlations are due to local interactions/transmissions, while at the same time modelling the joint state in a nonseparable form. Of course, then you wouldn't have an explicitly local, explicitly hidden variable model, but rather something along the lines of standard qm.
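
For concreteness, here is a standard textbook-style sketch (not ueit's proposal, and not a model of any specific experiment) of the formal point above: a naive local assignment of individual outcomes via Malus-law probabilities stays within the CHSH bound of 2, while the quantum joint-detection correlation for polarization-entangled photons exceeds it. All function names and the sample size are hypothetical.

```python
import math, random

def chsh(correlation):
    """CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b')."""
    a, a2, b, b2 = 0.0, math.pi / 4, math.pi / 8, 3 * math.pi / 8
    return (correlation(a, b) - correlation(a, b2)
            + correlation(a2, b) + correlation(a2, b2))

def qm_correlation(a, b):
    """Quantum prediction for polarization-entangled photon pairs."""
    return math.cos(2 * (a - b))

def lhv_correlation(a, b, n=200_000):
    """Toy local model: each pair shares a polarization lam; each side
    answers +1 with Malus-law probability cos^2(setting - lam)."""
    total = 0
    for _ in range(n):
        lam = random.uniform(0.0, math.pi)
        out_a = 1 if random.random() < math.cos(a - lam) ** 2 else -1
        out_b = 1 if random.random() < math.cos(b - lam) ** 2 else -1
        total += out_a * out_b
    return total / n

print(chsh(qm_correlation))   # 2*sqrt(2) ≈ 2.83, above the local bound of 2
print(chsh(lhv_correlation))  # ≈ 1.41, within the local bound
```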
 
  • #93
ThomasT said:
Yes, of course, the presumed existence of hidden variables comes with the assumption of determinism. I don't think anybody would deny that hidden variables exist.

Well, if you assume D then you must be including local hidden variables. Therefore you're rejecting both Bell's Theorem and the Heisenberg uncertainty principle. Moreover, I guess quantum fluctuations at the Planck scale could not be random either.

My reductio ad absurdum argument was based on thermodynamics, which at the theoretical level is based on probabilities. If a system can only exist in one possible state and only transition into one other possible state, there is no Markov process. All states (past, present, and future) exist with p=1 or p=0. Under D, probabilities can only reflect our uncertainty. If you plug probabilities of 0 or 1 into the Gibbs entropy formula, every term vanishes and the entropy is zero. Any values in between (under D) are merely reflections of our uncertainty. Yet we can actually measure finite, non-zero values of entropy in experiments (defined as Q/T, or heat/temperature). Such results cannot be only reflections of our uncertainty. Remember, there is no statistical independence under D.
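(As a quick numerical illustration of the zero-entropy point: a minimal sketch, not anyone's proposed model; the function name and the example distributions are just illustrative choices.)

```python
import math

def shannon_entropy(probs):
    # Shannon/Gibbs-style entropy in bits; terms with p = 0 contribute nothing
    # (the limit of p*log p as p -> 0 is 0).
    return sum(-p * math.log2(p) for p in probs if p > 0)

# A fully determined system: one state with probability 1 -> zero entropy.
print(shannon_entropy([1.0, 0.0, 0.0, 0.0]))      # 0.0

# Maximal subjective uncertainty over the same four states -> 2 bits.
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0
```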

None of this either proves or disproves D. I don't think it can be done. It seems to be essentially a metaphysical issue. However, it seems to me (I'm not a physicist) that you have to give up a lot to assume D at all scales.
 
  • #94
SW VandeCarr said:
Well, if you assume D then you must be including local hidden variables. Therefore you're rejecting both Bell's Theorem and the Heisenberg uncertainty principle. Moreover, I guess quantum fluctuations at the Planck scale could not be random either.
Yes, I'm including local hidden variables. Bell's analysis has to do with the formal requirements of lhv models of entangled states, not with what might or might not exist in an underlying quantum reality. The HUP has to do with the relationship between measurements on canonically conjugate variables: the product of the statistical spreads is equal to or greater than ħ/2. Quantum fluctuations come from an application of the HUP. None of this tells us whether or not there is an underlying quantum reality. I would suppose that most everybody believes there is. It also doesn't tell us whether Nature is local or nonlocal. So, the standard assumption is that it's local.


SW VandeCarr said:
None of this either proves or disproves D. I don't think it can be done.
I agree.

SW VandeCarr said:
It seems to be essentially a metaphysical issue.
I suppose so, but not entirely insofar as metaphysical constructions can be evaluated wrt our observations of Nature. And I don't think that one has to give up anything that's accepted as standard mainstream physical science to believe in a locally deterministic underlying reality.
 
  • #95
kote said:
Local causation stemming from real classical particles and waves has been falsified by experiments. EPRB type experiments are particularly illustrative of this fact.
These are formal issues, not matters of fact about what is or isn't true of an underlying reality.

kote said:
If there is evidence of deep reality being deterministic, I would like to know what it is :).
It's all around you. Order and predictability are the rule in physical science, not the exception. The deterministic nature of things is apparent on many levels, even wrt quantum experimental phenomena. Some things are impossible to predict, but, in general, things are not observed to happen independently of antecedent events. The most recent past (the present) differs only slightly from what it was one second before. Take a movie of any physical process that you can visually track and look at it frame by frame.

There isn't any compelling reason to believe that there aren't any fundamental deterministic dynamics governing the evolution of our universe, or that the dynamics of waves in media is essentially different wrt any scale of behavior. In fact, quantum theory incorporates lots of classical concepts and analogs.

kote said:
As for the universe being locally deterministic, this has been proven impossible.
This is just wrong. Where did you get this from?

Anyway, maybe you should start a new thread here in the philosophy forum on induction and/or determinism. I wouldn't mind discussing it further, but I don't think we're helping ueit wrt the thread topic.
 
  • #96
ThomasT said:
I suppose so, but not entirely insofar as metaphysical constructions can be evaluated wrt our observations of Nature. And I don't think that one has to give up anything that's accepted as standard mainstream physical science to believe in a locally deterministic underlying reality.

I think we may have to give up more if we want D. You didn't address my thermodynamic argument. Entropy is indeed a measure of our uncertainty regarding the state of a system. We already agreed that our uncertainty has nothing to do with nature. Yet how is it that we can measure entropy as the relation Q/T? The following shows how we can derive the direct measure of entropy from first principles (courtesy of Count Iblis):

http://en.wikipedia.org/wiki/Fundamental_thermodynamic_relation#Derivation_from_first_principles

The assumption behind this derivation is that the position and momentum of each of N particles (atoms or molecules) in a gas are uncorrelated (statistically independent). However D doesn't allow for statistical independence. Under D the position and momentum of each particle at any point in time is predetermined. Therefore there is full correlation of the momenta of all the particles.

What is actually happening when the experimenter heats the gas and observes a change in the Q/T relation (entropy increases)? Under D the whole experiment is a predetermined scenario with the actions of the experimenter included. The experimenter didn't decide to heat the gas or even set up the experiment. The experimenter had no choice. She or he is an actor following the deterministic script. Everything is correlated with everything else with measure one. There really is no cause and effect. There is only the predetermined succession of states. Therefore you're going to have to give up the usual (strong) form of causality, where we can perform experimental interventions to test causality (if you want D).

Causality is not defined in mathematics or logic. It's usually defined operationally where, given A is the necessary, sufficient and sole cause of B, if you remove A, then B cannot occur. Well under D we cannot remove A unless it was predetermined that p(B)=0. At best, we can have a weak causality where we observe a succession of states that are inevitable.
 
  • #97
kote said:
ueit,

Statistics are calculated mathematically from individual measurements. They are aggregate observations about likelihoods. Determinism deals with absolute rational causation. It would be an inductive fallacy to say that statistics can tell us anything about the basic mechanisms of natural causation.

There is no fallacy here. One may ask what deterministic models could fit the statistical data. If you are lucky you may falsify some of them and find the "true" one. There is no guarantee of success but there is no fallacy either.

Linguistically, we may use statistics as 2nd order explanations in statements of cause and effect, but it is understood that statistical explanations never represent true causation. If determinism exists, there must necessarily be some independent, sufficient, underlying cause - some mechanism of causation.

I don't understand the meaning of "independent" cause. Independent from what? Most probably, the "cause" is just the state of the universe in the past.

Because of the problem of induction, no irreducible superdeterministic explanation can prove anything about the first order causes and effects that basic physics is concerned with.

No absolute proof is possible in science and I do not see any problem with that. Finding an SD mechanism behind QM could lead to new physics, and I find this interesting.

The very concept of locality necessarily implies classical, billiard-ball style, momentum transfer causation. The experiments of quantum mechanics have conclusively falsified this model.

This is false. General relativity and classical electrodynamics are local theories, yet they are not based on the billiard-ball concept but on fields.
 
  • #98
ThomasT said:
Lhv formalisms of quantum entangled states are ruled out -- not the possible existence of lhv's. As things stand now, there's no conclusive argument for either locality or nonlocality in Nature. But the available physical evidence suggests that Nature behaves deterministically according to the principle of local causation.

You've already agreed that the method of changing the polarizer settings, as well as whether or not they're changed while the emissions are in flight or incident on the polarizers, is irrelevant to the rate of joint detection.

The reason that Bell inequalities are violated has to do with the formal requirements due to the assumption of locality. This formal requirement also entails statistical independence of the accumulated data sets at A and B. But entanglement experiments are designed and executed to produce statistical dependence via the pairing process.

There's no way around this unless you devise a model that can actually predict individual detections.

Or you could reason your way around the difficulty by noticing that the hidden variable (i.e., the specific quality of the emission that might cause enough of it to be transmitted by the polarizer to register a detection) is irrelevant wrt the rate of joint detection (the only thing that matters wrt joint detection is the relationship, presumably produced via simultaneous emission, between the two opposite-moving disturbances). This preserves the idea that the correlations are due to local interactions/transmissions, while at the same time modelling the joint state in a nonseparable form. Of course, then you wouldn't have an explicitly local, explicitly hidden variable model, but rather something along the lines of standard qm.

I think I should better explain what I think happens in an EPR experiment.

1. At the source location, the field is a function of the detectors' state. Because the model is local, this information is "old". If the detectors are 1 ly away, then the source "knows" the detectors' state as it was 1 year in the past.

2. From this available information and the deterministic evolution law the source "computes" the future state of the detectors when the particles arrive there.

3. The actual spin of the particles is set at the moment of emission and does not change on flight.

4. The correlations are a direct result of the way the source "chooses" the spins of the entangled particles. It so happens that this "choice" follows Malus's law.

In conclusion, changing the detectors before detection has no effect on the experimental results, because these changes are taken into account when the source "decides" the particles' spin. Bell's inequality is based on the assumption that the hidden variable that determines the particle spin is not related to the way the detectors are positioned. The above model denies this. Both the position of the detector and the spin of the particle are a direct result of the past field configuration.
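(A toy numerical sketch of this kind of loophole, purely illustrative and not the specific mechanism described above: the function names and the sampling rule are my own assumptions. The point it shows is that once the source's "choice" is allowed to depend on both detector settings, i.e. once statistical independence is denied, the Malus-law correlation E = 2cos²(Δ) − 1 follows trivially and Bell's derivation no longer applies.)

```python
import math
import random

def sd_pair(angle_a, angle_b):
    # Toy sketch only: the source is assumed to already "know" both detector
    # angles (via the past field configuration) and fixes both outcomes at
    # emission so that the pair statistics follow Malus's law.
    a = 1 if random.random() < 0.5 else -1        # outcome at detector A, unbiased
    p_same = math.cos(angle_a - angle_b) ** 2     # Malus-law probability of agreement
    b = a if random.random() < p_same else -a     # outcome at B fixed accordingly
    return a, b

def correlation(angle_a, angle_b, n=200_000):
    return sum(a * b for a, b in (sd_pair(angle_a, angle_b) for _ in range(n))) / n

# Because the settings enter the source's "choice", the quantum-like prediction
# E = 2*cos^2(delta) - 1 is reproduced for every angle difference delta.
for delta in (0.0, math.pi / 8, math.pi / 4):
    print(round(correlation(0.0, delta), 2), round(2 * math.cos(delta) ** 2 - 1, 2))
```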
 
  • #99
SW VandeCarr said:
The assumption behind this derivation is that the position and momentum of each of N particles (atoms or molecules) in a gas are uncorrelated (statistically independent). However D doesn't allow for statistical independence. Under D the position and momentum of each particle at any point in time is predetermined. Therefore there is full correlation of the momenta of all the particles.

The trajectory of the particle is a function of the field produced by all the other particles in the universe, therefore D does not require a strong correlation between the particles included in the experiment. Also, I do not see the relevance of predetermination to the issue of statistical independence. The digits of pi are strictly determined, yet they show no detectable correlation. What you need for the entropy law to work is not absolute randomness but pseudorandomness.
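(A minimal sketch of the pseudorandomness point; for convenience a seeded pseudorandom generator stands in for the digits of pi, and the seed and sample size are arbitrary choices.)

```python
import random
from collections import Counter

# A fully deterministic sequence (fixed seed), standing in here for the digits
# of pi -- the point is that determinism and pseudorandomness can coexist.
rng = random.Random(31415)
digits = [rng.randrange(10) for _ in range(100_000)]

# Digit frequencies come out near-uniform ...
print(Counter(digits))

# ... and the lag-1 sample autocorrelation is near zero: the sequence is
# completely determined, yet statistically it behaves like independent draws.
mean = sum(digits) / len(digits)
var = sum((d - mean) ** 2 for d in digits) / len(digits)
cov = sum((digits[i] - mean) * (digits[i + 1] - mean)
          for i in range(len(digits) - 1)) / (len(digits) - 1)
print(cov / var)
```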
 
  • #100
ueit said:
The trajectory of the particle is a function of the field produced by all the other particles in the universe, therefore D does not require a strong correlation between the particles included in the experiment. Also, I do not see the relevance of predetermination to the issue of statistical independence. The digits of pi are strictly determined, yet they show no detectable correlation. What you need for the entropy law to work is not absolute randomness but pseudorandomness.

Correlation is the degree of correspondence between two random variables. There are no random variables involved in the computation of pi.

Under D, probabilities only reflect our uncertainty. They have nothing to do with nature (as distinct from ourselves). Statistical independence is an assumption based on our uncertainty. Ten fair coin tosses are assumed to be statistically independent based on our uncertainty of the outcome. We imagine there are 1024 possible outcomes; under D there is only one possible outcome, and if we had perfect information we could know that outcome.

Under D, not only is the past invariant, but the future is also invariant. If we had perfect information, the future would be as predictable as the past is "predictable". It's widely accepted that completed events have no information value (i.e., p=1) and that information only exists under conditions of our uncertainty.

I agree that with pseudorandomness the thermodynamic laws work, but only because of our uncertainty, given that we lack the perfect information which would (in principle) be available under D.

EDIT: When the correlation (R²) is unity, it is no longer probabilistic, in that no particle moves independently of any other. Under D all particle positions and momenta are predetermined. If a full description of particle/field states is in principle knowable in the past, it is knowable in the future under D.
 