Arguments Against Superdeterminism

  • Thread starter: ueit
  • Tags: Superdeterminism

Summary:
Superdeterminism (SD) challenges the statistical independence assumed in quantum mechanics, particularly in the context of Bell's Theorem, suggesting that all events, including human decisions, are predetermined. This theory is often dismissed in scientific discussions, with calls for clearer arguments against it. Critics argue that SD implies a lack of free will, raising questions about the origins of human creativity and technological advancements, such as cell phones and colliders. The conversation also touches on the philosophical implications of determinism, questioning the nature of existence and the illusion of self. Ultimately, the discussion highlights the need for a comprehensive theory that reconciles quantum and classical behaviors while addressing the implications of determinism.
  • #91
ThomasT said:
A higher order explanation isn't necessarily superfluous, though it might, in a certain sense, be considered as such if there exists a viable lower order explanation for the same phenomenon.

ThomasT,

If there is no viable lower order explanation then by definition you aren't dealing with a higher order explanation. Higher order explanations, as such, are not necessary, and unless they are reducible to first order explanations, they cannot be sufficient either.

Basically, they aren't true causes (or explanations) at all.
 
  • #92
ueit said:
What I am interested in is local determinism, as non-local determinism is clearly possible (Bohm's interpretation). There are many voices that say that it has been ruled out by EPR experiments.
Lhv formalisms of quantum entangled states are ruled out -- not the possible existence of lhv's. As things stand now, there's no conclusive argument for either locality or nonlocality in Nature. But the available physical evidence suggests that Nature behaves deterministically according to the principle of local causation.

ueit said:
I am interested in seeing if any strong arguments can be put forward against the use of SD loophole (denying the statistical independence between emitter and detector) to reestablish local deterministic theories as a reasonable research object.
You've already agreed that the method of changing the polarizer settings, as well as whether or not they're changed while the emissions are in flight, incident on the polarizers, is irrelevant to the rate of joint detection.

The reason that Bell inequalities are violated has to do with the formal requirements due to the assumption of locality. This formal requirement also entails statistical independence of the accumulated data sets at A and B. But entanglement experiments are designed and executed to produce statistical dependence via the pairing process.

There's no way around this unless you devise a model that can actually predict individual detections.

Or you could reason your way around the difficulty by noticing that the hidden variable (ie., the specific quality of the emission that might cause enough of it to be transmitted by the polarizer to register a detection) is irrelevant wrt the rate of joint detection (the only thing that matters wrt joint detection is the relationship, presumably produced via simultaneous emission, between the two opposite-moving disturbances). Thus preserving the idea that the correlations are due to local interactions/transmissions, while at the same time modelling the joint state in a nonseparable form. Of course, then you wouldn't have an explicitly local, explicitly hidden variable model, but rather something along the lines of standard qm.
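
For concreteness, here is a minimal numerical sketch of the kind of violation at issue, assuming the standard CHSH combination and the quantum singlet correlation E(a, b) = -cos(a - b); the angles are the textbook choices, and nothing here models a particular experiment:

```python
import numpy as np

# Quantum correlation for the spin-singlet state at analyzer angles a, b (radians):
def E(a, b):
    return -np.cos(a - b)

# Textbook CHSH settings for maximal violation.
a, a2 = 0.0, np.pi / 2
b, b2 = np.pi / 4, 3 * np.pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))   # ~2.828 = 2*sqrt(2); any local hidden variable model obeys |S| <= 2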
 
  • #93
ThomasT said:
Yes, of course, the presumed existence of hidden variables comes with the assumption of determinism. I don't think anybody would deny that hidden variables exist.

Well, if you assume D then you must be including local hidden variables. Therefore you're rejecting both Bell's Theorem and the Heisenberg uncertainty principle. Moreover, I guess quantum fluctuations at the Planck scale could not be random either.

My reductio ad absurdum argument was based on thermodynamics, which at the theoretical level is based on probabilities. If a system can only exist in one possible state and only transit into one other possible state, there is no Markov process. All states (past, present, and future) exist with p=1 or p=0. Under D, probabilities can only reflect our uncertainty. If you plug probabilities of 0 or 1 into the Gibbs equation, the entropy collapses to zero (only the surprisal -ln p of an impossible state diverges). Any values in between (under D) are merely reflections of our uncertainty. Yet we can actually measure finite nonzero values of entropy in experiments (defined as Q/T, or heat/temperature). Such results cannot be only reflections of our uncertainty. Remember, there is no statistical independence under D.
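
For illustration, here is a small numerical sketch (in units where k_B = 1, with an arbitrary number of states) of what the Gibbs sum gives for a fully determined distribution versus a maximally uncertain one:

```python
import numpy as np

def gibbs_entropy(p, k_B=1.0):
    """S = -k_B * sum(p_i * ln p_i), with the 0*ln(0) terms taken as 0."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -k_B * np.sum(p * np.log(p))

n_states = 1024

p_deterministic = np.zeros(n_states)   # one state certain, all others impossible
p_deterministic[0] = 1.0

p_uniform = np.full(n_states, 1.0 / n_states)   # maximal uncertainty over the same states

print(gibbs_entropy(p_deterministic))   # 0.0
print(gibbs_entropy(p_uniform))         # ln(1024) ≈ 6.93
```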

None of this either proves or disproves D. I don't think it can be done. It seems to be essentially a metaphysical issue. However, it seems to me (I'm not a physicist) like you have to give up a lot to assume D at all scales.
 
  • #94
SW VandeCarr said:
Well, if you assume D then you must be including local hidden variables. Therefore you're rejecting both Bell's Theorem and the Heisenberg uncertainty principle. Moreover, I guess quantum fluctuations at the Planck scale could not be random either.
Yes, I'm including local hidden variables. Bell's analysis has to do with the formal requirements of lhv models of entangled states, not with what might or might not exist in an underlying quantum reality. The HUP has to do with the relationship between measurements on canonically conjugate variables. The product of the statistical spreads is bounded from below on the order of Planck's constant (Δx·Δp ≥ ħ/2). Quantum fluctuations come from an application of the HUP. None of this tells us whether or not there is an underlying quantum reality. I would suppose that most everybody believes there is. It also doesn't tell us whether Nature is local or nonlocal. So, the standard assumption is that it's local.
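
As a quick sanity check of that statement, here is a small numerical sketch in natural units (ħ = 1) showing that a discretized Gaussian wavepacket saturates Δx·Δp = ħ/2; the grid size and width are arbitrary choices:

```python
import numpy as np

hbar = 1.0  # natural units

# Discretize a Gaussian wavepacket psi(x) ~ exp(-x^2 / (4 sigma^2)); its position spread is sigma.
sigma = 0.7
x = np.linspace(-20, 20, 4096)
dx = x[1] - x[0]
psi = np.exp(-x**2 / (4 * sigma**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)       # normalize

prob_x = np.abs(psi)**2 * dx                      # position distribution
delta_x = np.sqrt(np.sum(prob_x * x**2) - np.sum(prob_x * x)**2)

# Momentum-space distribution via FFT (p = hbar * k); overall phases drop out of |.|^2.
psi_k = np.fft.fft(psi)
p = hbar * 2 * np.pi * np.fft.fftfreq(x.size, d=dx)
prob_p = np.abs(psi_k)**2
prob_p /= prob_p.sum()
delta_p = np.sqrt(np.sum(prob_p * p**2) - np.sum(prob_p * p)**2)

print(delta_x * delta_p, hbar / 2)                # both ~0.5: the Gaussian saturates the bound
```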


SW VandeCarr said:
None of this either proves or disproves D. I don't think it can be done.
I agree.

SW VandeCarr said:
It seems to be essentially a metaphysical issue.
I suppose so, but not entirely insofar as metaphysical constructions can be evaluated wrt our observations of Nature. And I don't think that one has to give up anything that's accepted as standard mainstream physical science to believe in a locally deterministic underlying reality.
 
  • #95
kote said:
Local causation stemming from real classical particles and waves has been falsified by experiments. EPRB type experiments are particularly illustrative of this fact.
These are formal issues. Not matters of fact about what is or isn't true about an underlying reality.

kote said:
If there is evidence of deep reality being deterministic, I would like to know what it is :).
It's all around you. Order and predictability are the rule in physical science, not the exception. The deterministic nature of things is apparent on many levels, even wrt quantum experimental phenomena. Some things are impossible to predict, but, in general, things are not observed to happen independently of antecedent events. The most recent past (the present) is only slightly different from 1 second before. Take a movie of any physical process that you can visually track and look at it frame by frame.

There isn't any compelling reason to believe that there aren't any fundamental deterministic dynamics governing the evolution of our universe, or that the dynamics of waves in media is essentially different wrt any scale of behavior. In fact, quantum theory incorporates lots of classical concepts and analogs.

kote said:
As for the universe being locally deterministic, this has been proven impossible.
This is just wrong. Where did you get this from?

Anyway, maybe you should start a new thread here in the philosophy forum on induction and/or determinism. I wouldn't mind discussing it further, but I don't think we're helping ueit wrt the thread topic.
 
  • #96
ThomasT said:
I suppose so, but not entirely insofar as metaphysical constructions can be evaluated wrt our observations of Nature. And I don't think that one has to give up anything that's accepted as standard mainstream physical science to believe in a locally deterministic underlying reality.

I think we may have to give up more if we want D. You didn't address my thermodynamic argument. Entropy is indeed a measure of our uncertainty regarding the state of a system. We already agreed that our uncertainty has nothing to do with nature. Yet how is it that we can measure entropy as the relation Q/T? The following shows how we can derive the direct measure of entropy from first principles (courtesy of Count Iblis):

http://en.wikipedia.org/wiki/Fundamental_thermodynamic_relation#Derivation_from_first_principles

The assumption behind this derivation is that the position and momentum of each of N particles (atoms or molecules) in a gas are uncorrelated (statistically independent). However, D doesn't allow for statistical independence. Under D the position and momentum of each particle at any point in time are predetermined. Therefore there is full correlation of the momenta of all the particles.
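
As a toy illustration of why that independence assumption matters (an analogy with binary microstates, not the linked derivation itself): joint entropy is extensive when the particles are independent and collapses under full correlation.

```python
import numpy as np
rng = np.random.default_rng(0)

def entropy_bits(samples):
    """Empirical Shannon entropy (bits) of the joint microstate, estimated from samples."""
    _, counts = np.unique(samples, return_counts=True, axis=0)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

n_trials, n_particles = 200_000, 8

# Independent case: each "particle" has its own binary degree of freedom.
independent = rng.integers(0, 2, size=(n_trials, n_particles))

# Fully correlated case: every particle just copies particle 0.
correlated = np.repeat(rng.integers(0, 2, size=(n_trials, 1)), n_particles, axis=1)

print(entropy_bits(independent))   # ~8 bits: entropy is extensive when particles are independent
print(entropy_bits(correlated))    # ~1 bit: full correlation collapses the joint entropy
```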

What is actually happening when the experimenter heats the gas and observes a change in the Q/T relation (entropy increases)? Under D the whole experiment is a predetermined scenario with the actions of the experimenter included. The experimenter didn't decide to heat the gas or even set up the experiment. The experimenter had no choice. She or he is an actor following the deterministic script. Everything is correlated with everything else with measure one. There really is no cause and effect. There is only the predetermined succession of states. Therefore you're going to have to give up the usual (strong) form of causality where we can perform experimental interventions to test causality (if you want D).

Causality is not defined in mathematics or logic. It's usually defined operationally where, given A is the necessary, sufficient and sole cause of B, if you remove A, then B cannot occur. Well under D we cannot remove A unless it was predetermined that p(B)=0. At best, we can have a weak causality where we observe a succession of states that are inevitable.
 
  • #97
kote said:
ueit,

Statistics are calculated mathematically from individual measurements. They are aggregate observations about likelihoods. Determinism deals with absolute rational causation. It would be an inductive fallacy to say that statistics can tell us anything about the basic mechanisms of natural causation.

There is no fallacy here. One may ask what deterministic models could fit the statistical data. If you are lucky you may falsify some of them and find the "true" one. There is no guarantee of success but there is no fallacy either.

kote said:
Linguistically, we may use statistics as 2nd order explanations in statements of cause and effect, but it is understood that statistical explanations never represent true causation. If determinism exists, there must necessarily be some independent, sufficient, underlying cause - some mechanism of causation.

I don't understand the meaning of "independent" cause. Independent from what? Most probably, the "cause" is just the state of the universe in the past.

kote said:
Because of the problem of induction, no irreducible superdeterministic explanation can prove anything about the first order causes and effects that basic physics is concerned with.

No absolute proof is possible in science and I do not see any problem with that. Finding a SD mechanism behind QM could lead to new physics and I find this interesting.

kote said:
The very concept of locality necessarily implies classical, billiard-ball style, momentum transfer causation. The experiments of quantum mechanics have conclusively falsified this model.

This is false. General relativity and classical electrodynamics are local theories, yet they are based not on the billiard-ball concept but on fields.
 
  • #98
ThomasT said:
Lhv formalisms of quantum entangled states are ruled out -- not the possible existence of lhv's. As things stand now, there's no conclusive argument for either locality or nonlocality in Nature. But the available physical evidence suggests that Nature behaves deterministically according to the principle of local causation.

You've already agreed that the method of changing the polarizer settings, as well as whether or not they're changed while the emissions are in flight, incident on the polarizers, is irrelevant to the rate of joint detection.

The reason that Bell inequalities are violated has to do with the formal requirements due to the assumption of locality. This formal requirement also entails statistical independence of the accumulated data sets at A and B. But entanglement experiments are designed and executed to produce statistical dependence via the pairing process.

There's no way around this unless you devise a model that can actually predict individual detections.

Or you could reason your way around the difficulty by noticing that the hidden variable (ie., the specific quality of the emission that might cause enough of it to be transmitted by the polarizer to register a detection) is irrelevant wrt the rate of joint detection (the only thing that matters wrt joint detection is the relationship, presumably produced via simultaneous emission, between the two opposite-moving disturbances). Thus preserving the idea that the correlations are due to local interactions/transmissions, while at the same time modelling the joint state in a nonseparable form. Of course, then you wouldn't have an explicitly local, explicitly hidden variable model, but rather something along the lines of standard qm.

I think I should better explain what I think happens in an EPR experiment.

1. At the source location, the field is a function of the detectors' state. Because the model is local, this information is "old". If the detectors are 1 ly away, then the source "knows" the detectors' state as it was 1 year in the past.

2. From this available information and the deterministic evolution law the source "computes" the future state of the detectors when the particles arrive there.

3. The actual spin of the particles is set at the moment of emission and does not change on flight.

4. The correlations are a direct result of the way the source "chooses" the spins of the entangled particles. It so happens that this "choice" follows Malus's law.

In conclusion, changing the detectors before detection has no effect on the experimental results because these changes are taken into account when the source "decides" the particles' spin. Bell's inequality is based on the assumption that the hidden variable that determines the particle spin is not related to the way the detectors are positioned. The above model denies this. Both the position of the detector and the spin of the particle are a direct result of the past field configuration.
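
To make steps 1-4 concrete, here is a deliberately crude toy sketch, not the proposed field mechanism itself: the "source" below is simply handed both analyzer settings in advance, which is precisely the statistical-independence assumption being denied, and it picks outcomes so that coincidences follow Malus's law. The function name and sampling rule are illustrative assumptions only.

```python
import numpy as np
rng = np.random.default_rng(42)

def run_pairs(angle_a, angle_b, n=100_000):
    """Toy 'superdeterministic' source: it is handed both analyzer angles in
    advance and picks each pair's outcomes so that the coincidence probability
    follows Malus's law, P(A == B) = cos^2(angle_a - angle_b)."""
    a_out = rng.choice([-1, 1], size=n)                      # outcome at detector A
    same = rng.random(n) < np.cos(angle_a - angle_b) ** 2    # source "decides" agreement
    b_out = np.where(same, a_out, -a_out)                    # outcome at detector B
    return a_out, b_out

for delta in np.deg2rad([0.0, 22.5, 45.0, 67.5, 90.0]):
    a_out, b_out = run_pairs(0.0, delta)
    print(f"delta = {np.rad2deg(delta):5.1f} deg   "
          f"P(same) = {np.mean(a_out == b_out):.3f}   "
          f"cos^2(delta) = {np.cos(delta) ** 2:.3f}")
```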
 
  • #99
SW VandeCarr said:
The assumption behind this derivation is that the position and momentum of each of N particles (atoms or molecules) in a gas are uncorrelated (statistically independent). However, D doesn't allow for statistical independence. Under D the position and momentum of each particle at any point in time are predetermined. Therefore there is full correlation of the momenta of all the particles.

The trajectory of the particle is a function of the field produced by all other particles in the universe; therefore, D does not require a strong correlation between the particles included in the experiment. Also, I do not see the relevance of predetermination to the issue of statistical independence. The digits of Pi are strictly determined, yet no correlation exists between them. What you need for the entropy law to work is not absolute randomness but pseudorandomness.
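
A quick sketch of the pseudorandomness point: a seeded generator is fully determined, yet its outputs show no detectable correlation. The seed and sequence length below are arbitrary.

```python
import numpy as np

# A seeded generator is completely determined: re-running it reproduces the
# sequence exactly. Yet simple statistical checks can't distinguish it from
# "truly" independent draws.
rng = np.random.default_rng(seed=12345)
x = rng.random(1_000_000)

lag1 = np.corrcoef(x[:-1], x[1:])[0, 1]
print(f"lag-1 correlation: {lag1:+.4f}")    # ~0: successive values look independent
print(f"mean: {x.mean():.4f} (0.5 expected for a uniform source)")

rerun = np.random.default_rng(seed=12345).random(1_000_000)
print(np.array_equal(x, rerun))             # True: the "random" sequence was fully predetermined
```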
 
  • #100
ueit said:
The trajectory of the particle is a function of the field produced by all other particles in the universe; therefore, D does not require a strong correlation between the particles included in the experiment. Also, I do not see the relevance of predetermination to the issue of statistical independence. The digits of Pi are strictly determined, yet no correlation exists between them. What you need for the entropy law to work is not absolute randomness but pseudorandomness.

Correlation is the degree of correspondence between two random variables. There are no random variables involved in the computation of pi.

Under D, probabilities only reflect our uncertainty. They have nothing to do with nature (as distinct from ourselves). Statistical independence is an assumption based on our uncertainty. Ten fair coin tosses are assumed to be statistically independent based on our uncertainty of the outcome. We imagine there are 1024 possible outcomes. Under D there is only one possible outcome, and if we had perfect information we could know that outcome.

Under D, not only is the past invariant, but the future is also invariant. If we had perfect information the future would be as predictable as the past is "predictable". It's widely accepted that completed events have no information value (ie p=1) and that information only exists under conditions of our uncertainty.

I agree that with pseudorandomness the thermodynamic laws work, but only because of our uncertainty given we lack the perfect information which could be available (in principle) under D.

EDIT: When correlation (R^{2}) is unity, it is no longer probabilistic in that no particles move independently of any other. Under D all particle positions and momenta are predetermined. If a full description of particle/field states is in principle knowable in the past, it is knowable in future under D.
 
  • #101
SW VandeCarr said:
What is actually happening when the experimenter heats the gas and observes a change in the Q/T relation (entropy increases)? Under D the whole experiment is a predetermined scenario with the actions of the experimenter included. The experimenter didn't decide to heat the gas or even set up the experiment. The experimenter had no choice. She or he is an actor following the deterministic script. Everything is correlated with everything else with measure one. There really is no cause and effect. There is only the predetermined succession of states. Therefore you're going to have to give up the usual (strong) form of causality where we can perform experimental interventions to test causality (if you want D).

Causality is not defined in mathematics or logic. It's usually defined operationally where, given A is the necessary, sufficient and sole cause of B, if you remove A, then B cannot occur. Well under D we cannot remove A unless it was predetermined that p(B)=0. At best, we can have a weak causality where we observe a succession of states that are inevitable.
The assumption of determinism and the application of probabilities are independent considerations.

I wouldn't separate causality into strong and weak types. We observe invariant relationships, or predictable event chains, or, as you say, "a succession of states that are inevitable". Cause and effect are evident at the macroscopic scale.

Determinism is the assumption that there are fundamental dynamical rules governing the evolution of any physical state or spatial configuration. We already agreed that it can't be disproven.

The distinguishing characteristic of ueit's proposal isn't that it's deterministic. What sets it apart is that it involves an infinite field of nondiminishing strength centered on polarizers or other filtration/detection devices and/or device combinations, propagating info at c to emission devices and thereby determining the time and type of emission, and so on. So far, it doesn't make much sense to me.

We already have a way of looking at these experiments which allows for an implicit, if not explicit, local causal view.

Anyway, his main question about arguments against the assumption of determinism has been answered, and I thought we agreed on this -- there aren't any good ones.
 
  • #102
ThomasT said:
Anyway, his main question about arguments against the assumption of determinism has been answered, and I thought we agreed on this -- there aren't any good ones.

Of course you can't disprove or really even argue against metaphysical assumptions (except with other metaphysical assumptions). Nature appears effectively deterministic at macro-scales if we disregard human intervention and human activity in general. At quantum scales, it remains to be proven that hidden variables exist. (Afaik, there is no real evidence for hidden variables). Therefore strict (as opposed to effective) determinism remains a matter of taste. In any case, to the extent that science uses probabilistic reasoning, science is not based de facto on strict determinism. Thermodynamics is based almost entirely on probabilistic reasoning. Quantum mechanics is deterministic only insofar as probabilities are determined and confirmed by experiment.

(Note: I'm using "effective determinism" in terms of what we actually observe within the limits of measurement, and "strict determinism" as a philosophical paradigm.)
 
  • #103
SW VandeCarr said:
At quantum scales, it remains to be proven that hidden variables exist. (Afaik, there is no real evidence for hidden variables).
I think everybody should believe that hidden variables exist, ie., that there are deeper levels of reality than our sensory faculties reveal to us. The evidence is electrical, magnetic, gravitational, etc., phenomena.

Whether local hidden variable mathematical formulations of certain experimental preparations are possible is another thing altogether. This was addressed by Bell.

Ueit is interested in lhv models. Bell says we're not going to have them for quantum entangled states, and so far nobody has found a way around his argument.
 
  • #104
ThomasT said:
Anyway, his main question about arguments against the assumption of determinism has been answered, and I thought we agreed on this -- there aren't any good ones.


That's right, but how is that different from the idea that we are living in the Matrix? If no good arguments can be put forward against that idea either, is it a model of the universe that you think should even be considered by science?
 
  • #105
WaveJumper said:
That's right, but how is that different from the idea that we are living in the Matrix? If no good arguments can be put forward against that idea either, is it a model of the universe that you think should even be considered by science?
One difference is that there are some good reasons to believe in determinism. It seems that our universe is evolving in a somewhat predictable way. There are many particular examples of deterministic evolution on various scales. This suggests some pervading fundamental dynamic(s). So, physics makes that assumption.

We might be in some sort of Matrix. But there's no particular reason to think that we are. The question is, does our shared, objective reality seem more deterministic the more we learn about it?
 
  • #106
ThomasT said:
The question is, does our shared, objective reality seem more deterministic the more we learn about it?

Well I think if you're looking at it that way the answer is very clearly "no" :smile:. QM had to go screw things up with the Copenhagen Interpretation giving up on deterministic objective reality completely. No one questioned it with Newton.

Since you mentioned it, it looks more and more like the world could be discrete, suggesting a structure with limits on its basic numerical accuracy - very Matrix-like.

I don't think science can tell us anything about the issue though. Hume covered that pretty well in my opinion. I do think the assumption of determinism is a rational extension of logic that needs to be made for the world to be intelligible for us.
 
  • #107
kote said:
Well I think if you're looking at it that way the answer is very clearly "no" :smile:. QM had to go screw things up with the Copenhagen Interpretation giving up on deterministic objective reality completely.
The CI tells us that the quantum of action and the requirements for objective communication place limits on what we can say about Nature. This has nothing to do with the assumption of determinism, which is a rational extension of what we do observe wrt the evolution of systems on various scales.

kote said:
Since you mentioned it, it looks more and more like the world could be discrete, suggesting a structure with limits on its basic numerical accuracy - very Matrix-like.
The more I learn, the more it seems to me that the world is a fundamentally seamless complex of interacting waveforms in a hierarchy of media. The metaphysical extension of quantization isn't discreteness per se, but rather resonances, harmonics, etc.

But maybe I don't understand what you're getting at here.

kote said:
I don't think science can tell us anything about the issue though. Hume covered that pretty well in my opinion. I do think the assumption of determinism is a rational extension of logic that needs to be made for the world to be intelligible for us.
Science is how we most objectively observe the world and least ambiguously communicate those observations. It wouldn't make much sense for us to talk about the world in any way other than how it seems to us to be evolving -- which is deterministically.
 
  • #108
ueit said:
1. At the source location, the field is a function of the detectors' state. Because the model is local, this information is "old". If the detectors are 1 ly away, then the source "knows" the detectors' state as it was 1 year in the past.
Ok, let's say the setup is A <1ly> E <1ly> B. The emission part of a series of runs begins and ends before the filters/detectors at A and B are even built. After all of the emissions that might possibly be detected in the experiment have been in transit for, say, 10 months, then the experimenters at A and B build their ends and put the stuff in place.

If they've set things up correctly, then when the data sets at A and B are properly paired and correlated with the appropriate angular differences, then you'll see something closely approximating a cos^2 Theta dependency (Malus Law).

But the filters'/detectors' state couldn't have had anything to do with the emission values because the filters/detectors didn't even exist until all of the emissions were already more than 3/4 of the way to the filters/detectors.

ueit said:
2. From this available information and the deterministic evolution law the source "computes" the future state of the detectors when the particles arrive there.
But, in the above scenario, the source couldn't have the necessary information, even nonlocally, because there were no filters/detectors to generate a field until long after all of the emissions originated.

Yet the joint results would approximate a Malus Law dependency between angular difference and rate of coincidental detection.
 
  • #109
ThomasT said:
Ok, let's say the setup is A <1ly> E <1ly> B. The emission part of a series of runs begins and ends before the filters/detectors at A and B are even built. After all of the emissions that might possibly be detected in the experiment have been in transit for, say, 10 months, then the experimenters at A and B build their ends and put the stuff in place.

If they've set things up correctly, then when the data sets at A and B are properly paired and correlated with the appropriate angular differences, then you'll see something closely approximating a cos^2 Theta dependency (Malus Law).

But the filters'/detectors' state couldn't have had anything to do with the emission values because the filters/detectors didn't even exist until all of the emissions were already more than 3/4 of the way to the filters/detectors.

The particles that the detectors are made of have existed in one form or another since the Big Bang, as energy conservation precludes bringing "new" matter into existence. The field that carries the information about the detectors is centered around each particle of the detector, and has also existed since the Big Bang. This information transfer takes place at the level of fundamental particles, not only when an object takes the macroscopic form of a detector.

ThomasT said:
But, in the above scenario, the source couldn't have the necessary information, even nonlocally, because there were no filters/detectors to generate a field until long after all of the emissions originated.

Yet the joint results would approximate a Malus Law dependency between angular difference and rate of coincidental detection.

See above.
 
  • #110
DrChinese said:
I pointed out that a) contradicts your hypothesis. So clearly SD is outside of what we know. That makes it 100% as speculative as the existence of God, so where is the science in any of this?

All our macroscopic evidence provides us an apparently deterministic view of the world. Current theory asks us to consider this as a mere coincidence that is not always true... but SD is an alternative that reconciles both the quantum experiments and our notion that the world is apparently deterministic. Therefore it has more evidence imo (albeit an inconclusive amount).
 
  • #111
ueit said:
The particles that the detectors are made of have existed in one form or another since the Big Bang, as energy conservation precludes bringing "new" matter into existence. The field that carries the information about the detectors is centered around each particle of the detector, and has also existed since the Big Bang. This information transfer takes place at the level of fundamental particles, not only when an object takes the macroscopic form of a detector.
This isn't testable. It amounts to saying that god did it. We agreed that superdeterminism is synonymous with determinism, and that there's good reason to assume that Nature is fundamentally deterministic and that it obeys the principle of locality. But this doesn't inform us about the specific mechanisms wrt which processes evolve.

It's been shown that lhv formulations are incompatible with quantum entangled states, and that this doesn't imply that nonlocality is a fact of nature -- but only that the formal requirements rule out an explicitly local realistic account.
 
  • #113
ThomasT said:
This isn't testable. It amounts to saying that god did it.

Once a mathematical formulation for such a field is found (if it exists) we can say if the theory is testable or not. On the other hand, this is no different from classical fields. They have existed since the Big Bang. If not, please tell me when a certain object acquired mass and started to feel the gravitational field.

ThomasT said:
It's been shown that lhv formulations are incompatible with quantum entangled states

Where?
 
  • #114
ueit said:
Once a mathematical formulation for such a field is found (if it exists) we can say if the theory is testable or not. On the other hand this is no different from classical fields. They have existed since the big-bang. If not, please tell me when a certain object acquired mass and started to feel the gravitational field?
What field? What theory? You say that emission is a function of filter/detector settings. But it obviously isn't, so then you say that this ability of the filter/detector to precipitate emission exists in the ethereal field or the particles that will eventually become the filter/detector. So, I ask you, what's wrong with this?

You're talking about a field that doesn't exist, surrounding objects that don't exist, affecting a process from which they're spacelike separated via local transmissions/interactions. This isn't good spitballing.

We've already discussed that there are reasons to believe that quantum entanglement is due, exclusively, to local transmissions/interactions. This is what I believe.

However, the problem, if one absolutely must have an explicitly local realist description of entanglement, is in finding a way to express locality in a way that's formally compatible with the required nonseparability (nonfactorability) of entangled states -- which is due to the statistical dependence, produced by the data pairing process, between the separately accumulated data sets.
 
  • #115
ThomasT said:
What field? What theory? You say that emission is a function of filter/detector settings. But it obviously isn't, so then you say that this ability of the filter/detector to precipitate emission exists in the ethereal field or the particles that will eventually become the filter/detector. So, I ask you, what's wrong with this?

You're talking about a field that doesn't exist, surrounding objects that don't exist, affecting a process from which they're spacelike separated via local transmissions/interactions. This isn't good spitballing.

Let's just assume that the field is the classical EM field. We have an electron source and two detectors. From now on, forget their macroscopic appearance; think of them as large groups of quarks and electrons, quark-electron "galaxies", if you want. Now calculate the resultant field (coming from each particle in the three "galaxies") around the location of the source. At certain times the force acting on an electron becomes large enough that the electron starts to move. You then select only those electrons that will be "captured" by the detector-galaxies. This is the type of theory I propose. Tell me what you think is wrong with it.
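
Purely as a schematic sketch of the kind of computation being described (not a physical model; every position, charge, and threshold below is an invented placeholder):

```python
import numpy as np
rng = np.random.default_rng(7)

# Hypothetical toy of the computation sketched above: sum the Coulomb fields of
# every particle in the three "galaxies" at the source and flag an "emission"
# whenever the resultant field there exceeds a threshold. All positions,
# charges, units, and the threshold are invented placeholders.
k_e = 1.0                                           # Coulomb constant, arbitrary units
positions = rng.normal(0.0, 5.0, size=(3000, 3))    # particles of source + two detectors
charges = rng.choice([-1.0, 1.0], size=3000)        # signed unit charges

def resultant_field(point):
    r = point - positions                           # vectors from each charge to the point
    d = np.linalg.norm(r, axis=1)
    return np.sum(k_e * charges[:, None] * r / d[:, None] ** 3, axis=0)

source_location = np.zeros(3)
E = resultant_field(source_location)
threshold = 0.5                                     # arbitrary "emission" threshold
print(np.linalg.norm(E), np.linalg.norm(E) > threshold)
```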
 
  • #116
ueit said:
Let's just assume that the field is the classical EM field. We have an electron source and two detectors. From now on, forget their macroscopic appearance; think of them as large groups of quarks and electrons, quark-electron "galaxies", if you want. Now calculate the resultant field (coming from each particle in the three "galaxies") around the location of the source. At certain times the force acting on an electron becomes large enough that the electron starts to move. You then select only those electrons that will be "captured" by the detector-galaxies. This is the type of theory I propose. Tell me what you think is wrong with it.
In our latest example, there was only one "galaxy", the emitter, wrt which some sort of emission-producing field effect could be associated. There could be no "quark-electron galaxies" associated with filters/detectors at the time of emission, because there were no filters/detectors at the time of emission.

I can almost envision what you're saying. But it's much too vague to be of any use. And anyway, imho, Nature doesn't work that way.

I think the evidence is pretty compelling that the assumption of parameter independence is a good one. The inequalities are being violated because the assumption of outcome independence is necessarily contradicted by the design and execution of Bell tests. So, what you need if you want a viable realistic description that is explicitly local, is a formal locality condition that doesn't include outcome independence.
 
  • #117
So if we roll back the universe 13.7 billion years, does everyone believe that if we have the same 'configuration' of energy states in the singularity, 13.7 billion years later we'd have exactly the same universe, where I would again be typing this post?
 
  • #118
I do.
 
  • #119
Blenton said:
I do.


One weird consequence of such a belief is that the single source of all deterministic reality obviously wants to fool us about its true nature, by implanting in the deterministic sequence of events conflicting concepts of false gods - Jesus, Mohammed, Buddha, etc., the 6-day creation, the 6000-year-old Earth, etc. Essentially, according to the deterministic model of the universe, god is not very different from a Nigerian scammer (the fraudulent emails being manifested by holy books).
 
  • #120
WaveJumper said:
One weird consequence of such a belief is that the single source of all deterministic reality obviously wants to fool us about its true nature, by implanting in the deterministic sequence of events conflicting concepts of false gods - Jesus, Mohammed, Buddha, etc., the 6-day creation, the 6000-year-old Earth, etc. Essentially, according to the deterministic model of the universe, god is not very different from a Nigerian scammer (the fraudulent emails being manifested by holy books).

I think that if there is any kind of consistent logic in the Universe, then yes, I believe that if it were re-run with identical conditions and configurations, the Universe would lead me to this very spot, sitting here typing. If not, then it seems obvious we live in a world without consistency, where the various configurations in the Universe throughout time, and their relationships and interactions with each other, do not correlate with macro-scale phenomena. If the outcome could be different, and let's say I did not exist as a result, or I was washing my car instead of typing at this moment, all of this points to a world where conditions, variables, configurations, and relationships do not consistently correlate with macro-scale outcomes.
 
