Insights: Superdeterminism and the Mermin Device

  • #51
PeterDonis said:
Consider the following experimental setup:

I have a source that produces two entangled photons at point A. The two photons go off in opposite directions to points B and C, where their polarizations are measured. Points B and C are each one light-minute away from point A.

At each polarization measurement, B and C, the angle of the polarization measurement is chosen 1 second before the photon arrives, based on random bits of information acquired from incoming light from a quasar roughly a billion light-years away that lies in the opposite direction from the photon source at A.

A rough diagram of the setup is below:

Quasar B -- (1B LY) -- B -- (1 LM) -- A -- (1 LM) -- C -- (1B LY) -- Quasar C

In this setup, any violation of statistical independence between the angles of the polarizations and the results of the individual measurements (not the correlations between the measurements, those will be as predicted by QM, but the statistics of each measurement taken separately) would have to be due to some kind of pre-existing correlation between the photon source at A and the distant quasars at both B and C. This is the sort of thing that superdeterminism has to claim must exist.
Exactly. On p. 3 of Superdeterminism: A Guide for the Perplexed, Sabine writes:
What does it mean to violate Statistical Independence? It means that fundamentally everything in the universe is connected with everything else, if subtly so.
 
  • #52
PeterDonis said:
In this setup, any violation of statistical independence between the angles of the polarizations and the results of the individual measurements (...) would have to be due to some kind of pre-existing correlation between the photon source at A and the distant quasars at both B and C.
Can you prove this? Is this not similar to claiming that, in the scenario of controlled medical studies, any correlations would have to be due to correlations between the sources of randomness and the selected patients? You are basically postulating a mechanism by which the violation of statistical independence could have happened, and then pointing out that this mechanism would be implausible. Of course it is implausible, but the task would have been to prove that such an implausible mechanism is the only way a violation of statistical independence can arise.
 
  • #53
Lord Jestocost said:
Then the proponents of “superdeterminism” should tackle the radioactive decay.
I don't study SD, so I don't know what they're looking for exactly. Maybe they can't control sample preparation well enough for radioactive decay? Just guessing, because that seems like an obvious place to look.
 
  • #54
gentzen said:
Can you prove this?
Isn't it obvious? We are talking about a lack of statistical independence between a photon source at A and light sources in two quasars, each a billion light-years from A in opposite directions. How else could that possibly be except by some kind of pre-existing correlation?

gentzen said:
Is this not similar to claiming that in the scenario occurring in controlled medical studies, any correlations would be due to correlations in the sources of randomness and the selected patient?
Sort of. Correlations between the "sources of randomness" and the patient would be similar, yes. (And if we wanted to be extra, extra sure to eliminate such correlations, we could use light from quasars a billion light-years away as the source of randomness for medical experiments, just as was done in the scenario I described.) But that has nothing to do with the placebo effect, and is not what I understood you to be talking about before.
 
  • #55
Quantumental said:
I disagree. Copenhagen's "there are limits to what we can know about reality, quantum theory is the limit we can probe" is no different from "reality is made up of more than quantum theory" which SD implies. It's semantics.
Maybe. But with this notion of semantics the question is: what topic regarding interpretations and foundations of established physics isn't semantics?

Quantumental said:
As for MWI, yes, but by doing away with unique outcomes (at least in the most popular readings of Everettian QM) you literally state: "The theory only makes sense if you happen to be part of the wavefunction where the theory makes sense, but you are also part of maverick branches where QM is invalidated, thus nothing can really be validated as it were pre-QM". I would argue that this stance is equally radical in its departure from what we consider the scientific method to be.
I find it difficult to discuss this because there are problems with the role of probabilities in the MWI which may or may not be solved. For now, I would say that I don't see the difference between my present experience of the past belonging to a maverick branch vs. a single-world view where, by pure chance, I and everybody else got an enormous amount of untypical data which led us to infer wrong laws of physics.
 
  • #56
Nullstein said:
"Quantum nonlocality" is just another word for Bell-violating, which is of course accepted, because it has been demonstrated experimentally and nobody has denied that. The open question is the mechanism and superdeterminism is one way to address it.
Right, as Sabine points out in her video:
Hidden Variables + Locality + Statistical Independence = Obeys Bell inequality
Once you've committed to explaining entanglement with a mechanism (HV), you're stuck with violating at least one of Locality or Statistical Independence.
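To make the arithmetic concrete, here is a minimal numerical sketch (illustrative only, not from the video; the outcome functions A and B and the settings below are my own choices): a local hidden-variable model obeying Statistical Independence stays at or below the CHSH bound |S| ≤ 2, while the quantum singlet-state prediction reaches 2√2.

```python
# Sketch: a local HV model with statistical independence vs. the QM singlet.
# The hidden variable lam is drawn independently of the settings, and each
# outcome depends only on the local setting and lam -- Bell's assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
lam = rng.uniform(0.0, 2 * np.pi, n)   # hidden variable, independent of settings

def A(a):                              # Alice's deterministic +/-1 outcome
    return np.sign(np.cos(a - lam))

def B(b):                              # Bob's deterministic +/-1 outcome
    return -np.sign(np.cos(b - lam))

def E(a, b):                           # correlation at settings (a, b)
    return np.mean(A(a) * B(b))

a0, a1, b0, b1 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4   # standard CHSH angles
S_lhv = E(a0, b0) - E(a0, b1) + E(a1, b0) + E(a1, b1)

def E_qm(a, b):                        # singlet-state prediction
    return -np.cos(a - b)

S_qm = E_qm(a0, b0) - E_qm(a0, b1) + E_qm(a1, b0) + E_qm(a1, b1)

print(f"local HV model:  |S| = {abs(S_lhv):.3f}  (<= 2 up to sampling noise)")
print(f"quantum singlet: |S| = {abs(S_qm):.3f}  (= 2*sqrt(2))")
```

This particular model happens to saturate the classical bound at |S| = 2; no choice of A, B, or distribution over lam can exceed it without giving up Locality or Statistical Independence.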
 
  • Like
Likes Lord Jestocost and DrChinese
  • #57
gentzen said:
I also tried to guess what they actually meant, namely that if there is nothing you can do to check whether your skepticism was justified, then it just stifles scientific progress.
Indeed, I would say it violates the principles of sound inference, which are presumably the foundation of the scientific process. Without evidence, the natural and sound assumption is to not assume dependence. To postulate, and invest in representing, the existence of an unknown causal mechanism seems deeply irrational to me. It is similar to the fallacy one commits when postulating an ad hoc microstructure with an ergodic hypothesis. Extrinsic ad hoc elements corrupt the natural inference process, and thus its explanatory value. Superdeterminism is IMO such an extrinsic ad hoc element. I wouldn't say it's "wrong"; I just find it an irrational thought from the perspective of learning, and scientific progress is about learning - but in a rational way.

/Fredrik
 
  • #58
Demystifier said:
How does non-locality require fine tuning?
In order to reproduce QM using non-locality, you don't just need non-locality. You need to tune the equations in such a way that they obey the no communication property, i.e. it must be impossible to use the non-local effects to transmit information. That's very much akin to the no free will requirement in superdeterminism. As can be seen in this paper, such a non-local explanation of entanglement requires even more fine tuning than superdeterminism.
Demystifier said:
How is non-locality anthropocentric?
Because communication is an anthropocentric notion. If the theory needs to be fine tuned to make communication impossible, this is thus a very anthropocentric requirement.
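To illustrate what the no-communication requirement amounts to operationally, here is a minimal sketch (standard QM, my own example): the singlet joint distribution violates Bell inequalities, yet Alice's marginal statistics are exactly 50/50 no matter what Bob chooses, so the non-local correlations cannot carry a signal.

```python
# Sketch: no-signaling in the singlet correlations. Alice's marginal
# probability is independent of Bob's setting b, even though the joint
# distribution is Bell-violating.
import numpy as np

def p_joint(x, y, a, b):
    """Singlet joint probability P(x, y | a, b), outcomes x, y in {+1, -1}."""
    return (1 - x * y * np.cos(a - b)) / 4

def p_alice(x, a, b):
    """Alice's marginal: sum over Bob's outcomes."""
    return p_joint(x, +1, a, b) + p_joint(x, -1, a, b)

a = 0.3                          # Alice's (fixed) setting
for b in (0.0, 0.7, 2.1):        # Bob varies his setting freely
    print(f"b = {b}:  P(+1 | a) = {p_alice(+1, a, b):.3f}")   # always 0.500
```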
 
  • Skeptical
  • Like
Likes PeroK and gentzen
  • #59
RUTA said:
Once you've committed to explaining entanglement with a mechanism (HV), you're stuck with violating at least one of Locality or Statistical Independence.
Only if you insist that by "mechanism" we mean an explanation in terms of hidden variables. I rather expect that we need to refine our notion of what constitutes a mechanism. Before Einstein, people already knew the Lorentz transformations, but they were stuck believing in an ether explanation. Even today, there are a few holdouts who can't wrap their heads around the modern interpretation. I think with quantum theory, we are in a quite similar situation. We know the correct formulas, but we are stuck in classical thinking. I guess we need another Einstein to sort this out.
 
  • Like
Likes Fra and martinbn
  • #60
Nullstein said:
You need to tune the equations in such a way that they obey the no communication property, i.e. it must be impossible to use the non-local effects to transmit information.
For me, it is like a 19th-century critique of the atomic explanation of thermodynamics based on the idea that it requires fine tuning of the rules of atomic motion to fulfill the 2nd law (that the entropy of the full system cannot decrease) and the 3rd law (that you cannot reach the state of zero entropy). And yet, as Boltzmann understood back then and many physics students understand today, those laws are FAPP laws that do not require fine tuning at all.

Indeed, in Bohmian mechanics it is well understood why nonlocality does not allow (in the FAPP sense) communication faster than light. It is a consequence of quantum equilibrium, for more details see e.g. https://arxiv.org/abs/quant-ph/0308039
 
  • #61
Nullstein said:
Only if you insist that by "mechanism" we mean an explanation in terms of hidden variables. I rather expect that we need to refine our notion of what constitutes a mechanism. Before Einstein, people already knew the Lorentz transformations, but they were stuck believing in an ether explanation. Even today, there are a few holdouts who can't wrap their heads around the modern interpretation. I think with quantum theory, we are in a quite similar situation. We know the correct formulas, but we are stuck in classical thinking. I guess we need another Einstein to sort this out.
Yes, and even Einstein tried "mechanisms" aka "constructive efforts" to explain time dilation and length contraction before he gave up and went with his principle explanation (relativity principle and light postulate). Today, most physicists are content with that principle explanation without a constructive counterpart (no "mechanisms", see this article), i.e., what you call "the modern interpretation." So, if you're happy with that principle account of SR, you should be happy with our principle account of QM (relativity principle and "Planck postulate"). All of that is explained in "No Preferred Reference Frame at the Foundation of Quantum Mechanics". Here is my APS March Meeting talk on that paper (only 4 min 13 sec long) if you don't want to read the paper.
 
  • #62
Nullstein said:
I rather expect that we need to refine our notion of what constitutes a mechanism.
I agree that this is a major insight. The notion of "mechanism" in the current paradigm heavily relies on state spaces and timeless laws, i.e., equation-based models. In models of interacting parts that are learning, I think the "causation" is different; it is essentially the rules of learning and of the organisation of the parts. This thinking is totally absent when one talks about the Bell inequality. It requires a different paradigm, I think.

/Fredrik
 
  • #63
Demystifier said:
For me, it is like a 19th-century critique of the atomic explanation of thermodynamics based on the idea that it requires fine tuning of the rules of atomic motion to fulfill the 2nd law (that the entropy of the full system cannot decrease) and the 3rd law (that you cannot reach the state of zero entropy). And yet, as Boltzmann understood back then and many physics students understand today, those laws are FAPP laws that do not require fine tuning at all.
No, that's very different. Boltzmann's theory had measurable implications. He did not claim that nature conspires to hide atoms from humans even in principle. If we look closely enough, we can observe their effects and make use of them, even though that may not have been technologically possible back in Boltzmann's day. In stark contrast, any theory that reproduces QM must obey the no communication theorem and prohibit the observation and use of those non-local effects forever.
Demystifier said:
Indeed, in Bohmian mechanics it is well understood why nonlocality does not allow (in the FAPP sense) communication faster than light. It is a consequence of quantum equilibrium, for more details see e.g. https://arxiv.org/abs/quant-ph/0308039
It doesn't matter, though, whether it is well understood. That doesn't make it any less fine tuned.
 
  • #64
RUTA said:
Yes, and even Einstein tried "mechanisms" aka "constructive efforts" to explain time dilation and length contraction before he gave up and went with his principle explanation (relativity principle and light postulate). Today, most physicists are content with that principle explanation without a constructive counterpart (no "mechanisms", see this article), i.e., what you call "the modern interpretation."
I wouldn't consider modern special relativity less constructive than classical mechanics. SR replaces Galilei symmetry by Poincaré symmetry. There is no reason why Galilei symmetry should be preferred; nature just made a different choice. We don't get around postulating some basic principles in any case.
RUTA said:
So, if you're happy with that principle account of SR, you should be happy with our principle account of QM (relativity principle and "Planck postulate"). All of that is explained in "No Preferred Reference Frame at the Foundation of Quantum Mechanics". Here is my APS March Meeting talk on that paper (only 4 min 13 sec long) if you don't want to read the paper.
For me, a satisfactory explanation of QM would have to explain, why we have to use this weird Hilbert space formalism and non-commutative algebras in the first place. Sure, we have learned to perform calculations with it, but at the moment, it's not something a sane person would come up with if decades of weird experimental results hadn't forced them to. I hope someone figures this out in my lifetime.
 
  • #65
Nullstein said:
I wouldn't consider modern special relativity less constructive than classical mechanics. SR replaces Galilei symmetry by Poincaré symmetry. There is no reason why Galilei symmetry should be preferred; nature just made a different choice. We don't get around postulating some basic principles in any case.

For me, a satisfactory explanation of QM would have to explain, why we have to use this weird Hilbert space formalism and non-commutative algebras in the first place. Sure, we have learned to perform calculations with it, but at the moment, it's not something a sane person would come up with if decades of weird experimental results hadn't forced them to. I hope someone figures this out in my lifetime.
SR is a principle theory, per Einstein. Did you read the Mainwood article with Einstein quotes?

And, yes, the information-theoretic reconstructions of QM, whence the relativity principle applied to the invariant measurement of h, give you the Hilbert space formalism with its non-commutative algebra. So, it's figured out in principle fashion. Did you read our paper?
 
  • #66
RUTA said:
SR is a principle theory, per Einstein. Did you read the Mainwood article with Einstein quotes?
Yes, I read the article, but I don't agree with it. I don't agree that there is such a distinction between principle theories and constructive theories. Every theory must be based on some fundamental axioms. SR just has different ones than classical mechanics.
RUTA said:
And, yes, the information-theoretic reconstructions of QM, whence the relativity principle applied to the invariant measurement of h, give you the Hilbert space formalism with its non-commutative algebra. So, it's figured out in principle fashion. Did you read our paper?
I watched the video, but then I saw ket vectors suddenly appearing from one slide to the next. Also, it was only concerned with spin states and rotations, not with recovering the full QM formalism. I'm aware that there are people who try to derive QM from information theoretic principles, but personally, I'm not satisfied by the axioms that they came up with so far. To me, they are not more intuitive than the ones von Neumann came up with. Personally, I'm hoping for something far more intuitive.
 
  • #67
Nullstein said:
Demystifier said:
How is non-locality anthropocentric?
Because communication is an anthropocentric notion. If the theory needs to be fine tuned to make communication impossible, this is thus a very anthropocentric requirement.
That is an impressively nice thought. Much nicer and deeper than your later elaboration:
Nullstein said:
And that makes it anthropocentric, because the Born rule captures everything that is in principle accessible to humans while it hides everything that is inaccessible to humans in principle (such as details about hidden variables).

A "strange" discussion on "true randomness" on the FOM list just ended, with conclusions like:
Alex Galicki said:
It seems that in the current discussion on "true randomness", there is so much confusion that it would take way too much time for anyone to clear that up in a reasonable amount of time.
...
C) One simple but important idea from Algorithmic Randomness is that randomness is always relative, there is no "true randomness".
My own attempted definition of "a truly random phenomenon" (in the context of gambling and games, anthropocentric and relative) also occurred in that discussion, contrasted to quantum randomness:
  • Quantum mechanics: The randomness itself is nonlocal, and it must be really random, because otherwise this non-locality could be used for instantaneous signal transmission.
  • Gambling and games: A truly random phenomenon must produce outcomes that are completely unpredictable. And not just unpredictable for you and me, but unpredictable for both my opponents and proponents.
The nice thing about your thought is that quantum randomness should better not be anthropocentric and relative. Or maybe it should, because "true randomness" is. This could point towards ideas like Gell-Mann's and Hartle's "information gathering and utilizing systems (IGUSs)" or Rovelli's relational quantum mechanics.
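As a toy illustration of the relativity of randomness (my own example, not from the FOM discussion; the seed value is arbitrary): the very same bit stream is perfectly predictable to an observer who holds the generator's seed and statistically indistinguishable from coin flips to one who doesn't.

```python
# Sketch: "randomness" depends on what the observer knows.
import random

SEED = 12345                        # secret known only to observer A

def bit_stream(seed, n):
    rng = random.Random(seed)
    return [rng.randint(0, 1) for _ in range(n)]

stream = bit_stream(SEED, 1000)     # the phenomenon both observers watch

# Observer A knows the seed: perfect prediction of every bit.
prediction = bit_stream(SEED, 1000)
hits = sum(p == s for p, s in zip(prediction, stream))
print("observer A prediction accuracy:", hits / 1000)        # 1.0

# Observer B lacks the seed: the stream looks like a fair coin.
print("observer B sees bit frequency:", sum(stream) / 1000)  # ~0.5
```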
 
  • #68
gentzen said:
One simple but important idea from Algorithmic Randomness is that randomness is always relative, there is no "true randomness"
This is a key insight IMO. Even though it is a big mess, the conceptual value of the insight becomes obvious, I think, only once you start considering the structure, actions, and rules of interacting IGUSs / agents / observers. This is where I want to dig further.

This has many relations to unification issues. For example, is Hawking radiation random? Is a big black hole "more random" than a microscopic one? If so, why?

/Fredrik
 
  • #69
Nullstein said:
It doesn't matter though, whether it is well understood. That doesn't make it any less fine tuned.
So where exactly is fine tuning in the Bohmian theory?
 
  • #70
Nullstein said:
As can be seen in this paper, such a non-local explanation of entanglement requires even more fine tuning than superdeterminism.
I can't find this claim in the paper. Where exactly does the paper say that?
 
  • #71
Demystifier said:
So where exactly is fine tuning in the Bohmian theory?
The fine tuning is in the quantum equilibrium assumption. But maybe Valentini's version is able to overcome the fine tuning, at the cost (and benefit) of predicting violations of present quantum theory. There's a discussion by Wood and Spekkens on p. 21 of https://arxiv.org/abs/1208.4119.
 
  • Like
Likes Demystifier
  • #72
Demystifier said:
So where exactly is fine tuning in the Bohmian theory?
See atyy's post (although I don't agree that Valentini's version fixes this).
Demystifier said:
I can't find this claim in the paper. Where exactly does the paper say that?
He shows that his superdeterministic model requires only 1/15 bit of shared information to reproduce QM, while a non-local model requires 1 bit.
 
  • #73
Nullstein said:
The system is made of particles that evolve just fine according to Hamiltons equations of motion. There is no fundamental need for a quantity like temperature. It's invented by humans to quantify an aggregate property of the system.
The new states and concepts evolved IN HUMANS because they increase our fitness, responsiveness, and learning. Even a human has to act under information-processing constraints. The model of simulating Hamiltonian dynamics from initial values is, as we know, not viable due to deterministic chaos. This is a key insight, I think, for understanding how relations evolve and persist. Non-commutative logic is, to me, a form of data compression that nature likely evolved because it's the only way to achieve stability. This in itself has, IMO, nothing to do with humans. The IGUSs or agents are themselves just subsystems of matter.

/Fredrik
 
  • #74
Nullstein said:
See atyy's post (although I don't agree that Valentini's version fixes this).
OK, so why do you not agree that Valentini's version fixes this?
 
  • #75
RUTA said:
So, how do we explain violations of Bell's inequality without nonlocal interactions, violations of Statistical Independence, or "shut up and calculate" (meaning "the formalism of QM works, so it's already 'explained'")? Our principle explanation of Bell state entanglement doesn't entail an ontology at all, so there is no basis for nonlocality or violations of Statistical Independence. And, it is the same (relativity) principle that explains time dilation and length contraction without an ontological counterpart and without saying, "the Lorentz transformations work, so it's already 'explained'". So, we do have an explanation of Bell state entanglement (and therefore of the Tsirelson bound).

An explanation for the "Tsirelson Bound"??
:oops:

"An explanation is a set of statements, a set of facts, which states the causes, context, and consequences of those facts. It may establish rules or laws."
 
  • #76
physika said:
An explanation for the "Tsirelson Bound"??
:oops:

"An explanation is a set of statements, a set of facts, which states the causes, context, and consequences of those facts. It may establish rules or laws."
Yes, here is the explanatory sequence:

1. No preferred reference frame + h --> average-only projection for qubits
2. Average-only projection for qubits --> average-only conservation per the Bell states
3. Average-only conservation per the Bell states --> Tsirelson bound

In short, the Tsirelson bound obtains due to "conservation per no preferred reference frame".
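For anyone who wants to check the final step numerically, here is a minimal sketch (a standard textbook computation, not RUTA's derivation itself): the CHSH operator built from ±1-valued qubit spin observables has maximum eigenvalue 2√2, the Tsirelson bound, attained on a Bell state with the settings below.

```python
# Sketch: the Tsirelson bound as the largest eigenvalue of the CHSH operator.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli X
Z = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli Z

def spin(theta):
    """Spin observable at angle theta in the X-Z plane (eigenvalues +/-1)."""
    return np.cos(theta) * Z + np.sin(theta) * X

a0, a1 = spin(0.0), spin(np.pi / 2)             # Alice's two settings
b0, b1 = spin(np.pi / 4), spin(-np.pi / 4)      # Bob's two settings

CHSH = (np.kron(a0, b0) + np.kron(a0, b1)
        + np.kron(a1, b0) - np.kron(a1, b1))

print("max eigenvalue:", np.linalg.eigvalsh(CHSH).max())   # ~2.828 = 2*sqrt(2)
```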
 
  • #79
DrChinese said:
The modern entanglement swapping examples have entangled particles which never exist in a common backward light cone - so they cannot have had an opportunity for a local interaction or synchronization. The A and B particles are fully correlated like any EPR pair, even though far apart and having never interacted*. That is quantum nonlocality, plain and simple. There are no "local" explanations for entanglement swapping of this type (that I have seen), of course excepting interpretations that claim to be fully local and forward in time causal (which I would probably dispute). Some MWI proponents makes this claim, although not all.*Being from independent photon sources, and suitably distant to each other.
Hi! I hope these questions haven't been asked a lot. I checked another thread, but couldn't find exactly this. What you said here is basically how I interpreted your reply and many others in it: that particles A and B have non-local transfer. What about, say, a computer program of a universe like ours: it would seem like there's no local transfer in it, but there is (the electrons implementing the program). Would such hidden variables necessarily be detectable in experiments in our universe (Bell, von Neumann's existence theorem) if these hidden local variables existed like a program (or superdeterminism)? If so, does superdeterminism presuppose no contact info-transfer?

Another question: if the info transfer is non-local, does that mean information can travel outside space (and thus outside time)? What I mean is that it basically comes out of nowhere: there doesn't need to be any CoA - Contact of Action; it simply "knows"?

Finally, is the entanglement state change between particles A and B instantaneous? How does this relate to causality if there is no time difference? Is it perhaps like a Halting Problem? (This last question is a bit vague and strange, so feel free to say Yes/No/Idk; I really can't explain it better, it's more like an idea.)
 
  • #80
I am a novice here. But I have been studying SD for about 8 months now. Going back to the basic issue Einstein had with non-locality… no one seems to be addressing a point in this discussion: assume that causality cannot travel faster than the speed of light (we have never observed a violation of the speed of light, and without particles we have no causality).

We see measurements that SEEM to violate locality, BUT we do have a way out via SD (John Bell himself said so). We do believe in symmetry laws, conservation of energy, momentum, etc. We CAN predict, in a real sense, where planets, baseballs, etc. will be in the future. I think that, prior to QM, we would all agree that if all factors were included in the initial conditions and all particles (heat = photons, etc.) could be taken into account, we could predict a small system in its entirety. I have never seen an argument where someone says "conservation of energy is only approximate", because they would need to demonstrate this experimentally.

So given that conservation laws are all assumed to be exact (pre-QM), what happened to the argument that all events are pre-determined? Not predictable, as that is impossible, but pre-determined, given the belief that we have discovered all the laws of motion and with the assumption that particles are real (again, particles became non-real after QM; only when the wave function was available did we consider that balls are not real).

So here is my question: If the Universe is not pre-determined, where did the differences in energy, momentum, etc go? A non-pre-determined Universe needs to violate conservation laws somewhere. Either we have not discovered all the laws, or there is a leak in Energy somewhere we have not discovered yet. Do we not believe our own laws of Physics?

If we

1. Assume conservation laws hold everywhere for all time and are exact
2. Assume speed of light is universal.
3. Assume causality depends on particles

then the Universe MUST be pre-determined; there is no way around it that we have actually observed. NONE. SD, it seems to me, does not need to prove itself; rather, SD must be disproven, as it is an obvious result of the above assumptions. Which of the 3 assumptions would we abandon?

Now enter QM. We see entanglement experiments confirm QM predictions, but we also believe in the above 3 assumptions, so why do we need to introduce non-local interpretations at all? SD is the way the Universe works based on the 3 assumptions; what is the problem? Based on what I have read, the only argument seems to be the refusal to believe that we are not free to do Science. That is the only argument I have heard - that we FEEL we are free to make decisions on our own and this somehow invalidates all our scientific evidence for SD.

But this was a problem early on - yes, we SEE that our laws work, and we understand we cannot take into account all factors when trying to predict an outcome - but the underlying ASSUMPTION was that there are laws that govern the Universe, and therefore, unless someone can demonstrate how these laws are violated, the Universe is pre-determined down to the last photon.

There are those who think we live in a simulation - pre-determined again. No evidence for that really, BUT the world view helps explain why we THINK we have free will. An AI living in a simulation may go through its life believing it had free will without ever realizing otherwise. If we ditch the FEELING that we are making free decisions, then SD is absolutely the simplest way to explain the seemingly non-local results of QM. Not “many worlds” which has no observational evidence. I do not see an alternative to SD that is consistent with all our laws and measurements, and the AI worldview easily explains at least ONE way we can be fooled into believing we have free will. But in any case, FEELINGS have been the bane of science forever.

Non-local QM theories are not necessary, as far as I can tell, if SD is considered an option. If AIs and simulations had been around BEFORE QM was discovered I do not think that non-local theories would ever have been seriously considered.
 
  • #81
kclubb said:
I am a novice here. But I have been studying SD for about 8 months now. Going back to the basic issue Einstein had with non-locality…

...

Non-local QM theories are not necessary, as far as I can tell, if SD is considered an option. If AIs and simulations had been around BEFORE QM was discovered I do not think that non-local theories would ever have been seriously considered.
As I showed in this Insight, the indeterminism we have in QM is unavoidable according to the relativity principle. And, yes, that means conservation of spin angular momentum is not exact when Alice and Bob are making different measurements. Conservation holds only on average (Bob saying Alice must average her results and Alice saying the same about Bob) when they make different measurements.
 
  • #82
kclubb said:
without particles we have no causality
This is not correct; field theories that do not contain any particles still have causality.
 
  • #83
kclubb said:
A non-pre-determined Universe needs to violate conservation laws somewhere.
No, it doesn't. Events that are not pre-determined can still happen in a way that obeys conservation laws.
 
  • #84
RUTA said:
that means conservation of spin angular momentum is not exact when Alice and Bob are making different measurements. Conservation holds only on average
I don't think this claim can be asserted as fact at our current level of knowledge. When we make measurements on quantum systems, we bring into play huge sinks of energy and momentum (measuring devices and environments). But we don't measure the change in energy and momentum of the sinks. We only look at the measured systems. But if a measurement takes place, the measured systems are not closed systems and we should not in general expect them to obey conservation laws in isolation; they can exchange energy and momentum with measuring devices and environments. To know that conservation laws were violated we would have to include the changes in energy and momentum of the measuring devices and environments. But we don't. So I don't see that we have any basis to assert what you assert in the above quote. All we can say is that we have no way of testing conservation laws for such cases at our current level of technology.
 
  • #85
kclubb said:
If we

1. Assume conservation laws hold everywhere for all time and are exact
2. Assume speed of light is universal.
3. Assume causality depends on particles

then the Universe MUST be pre-determined...

You are completely ignoring Bell's Theorem. I realize that Bell himself mentioned Superdeterminism (SD) as an "out" for his own theorem (as you point out). However, SD requires substantially more assumptions than the 3 you have above. In other words: unless you add substantially more (and progressively more outrageous) assumptions to those 3, at least one of those 3 must not hold true.

And I get tired of saying this, but: There is no candidate SD theory in existence. By this I mean: one which explains why any choice of measurement basis leads to a violation of a Bell Inequality, in any of the following scenarios:

a. Measurement basis does not vary between pairs. This is the most common Bell test, and violates a Bell inequality.

b. Measurement basis does vary:
i. By random selection, such as by computers or by radioactive samples. This too has been done, and violates a Bell inequality.
ii. By human choice (such as the Big Bell test, and violates a Bell inequality).

If there were such a theory, it could easily be falsified by suitable variations on the above. Further, there is no particular rationale for invoking SD as an explanation for observed results in the area of entanglement, but nowhere else in all of science. You may as well claim that the "true" value of c is 2% higher than the observed value... due to Superdeterminism.
 
  • #86
RUTA said:
As I showed in this Insight, the indeterminism we have in QM is unavoidable according to the relativity principle. And, yes, that means conservation of spin angular momentum is not exact when Alice and Bob are making different measurements. Conservation holds only on average (Bob saying Alice must average her results and Alice saying the same about Bob) when they make different measurements.
Wikipedia claims that John Bell made this statement in the 1980s (I have not tracked it down):

"There is a way to escape the inference of superluminal speeds and spooky action at a distance. But it involves absolute determinism in the universe, the complete absence of free will."

So does your explanation go beyond what John Bell claimed in that interview - that SD does not eliminate the need for non-locality?
 
  • #87
PeterDonis said:
This is not correct; field theories that do not contain any particles still have causality.
Interesting. What would be an example? My understanding is that all forces have corresponding particles - the Standard Model.
 
  • #88
PeterDonis said:
No, it doesn't. Events that are not pre-determined can still happen in a way that obeys conservation laws.
We usually have to apply multiple laws simultaneously in classical Physics to arrive at a unique trajectory (energy + momentum, etc.). Are you speaking of not being able to measure two properties at the same time exactly - satisfying one law but not knowing the exact value for another property? In this case we just cannot measure EXACTLY the values of both momentum and energy, say, at the same time, and therefore cannot say one way or the other in order to apply the laws at the same time. But just because we cannot measure the two properties at the same time does not mean that the event happened in a non-deterministic way.

In any case, when you say “Events that are not pre-determined” you are already biasing your argument, because the argument is about whether the Universe is pre-determined, and you are already claiming that it is not in your answer. What you need to show is an example where a conservation law was VIOLATED when the observation is made.
 
  • #89
DrChinese said:
You are completely ignoring Bell's Theorem. However, SD requires substantially more assumptions than the 3 you have above. ...

And I get tired of saying this, but: There is no candidate SD theory in existence. ...

You may as well claim that the "true" value of c is 2% higher than the observed value... due to Superdeterminism.
So we do need to find a valid theory that works. Sean Carroll advocates for the “many worlds” interpretation, speculating that every time a “measurement” is made, a new Universe comes into existence. Is this one of the “acceptable” and not “outrageous” assumptions that work for you? Because there is a real need to deal with non-locality. I am not studied on all the “outrageous” assumptions you speak of, but a new Universe being created every time there is a measurement seems outrageous to me, compared to a deterministic Universe we give up free will for. How would we ever test many worlds? I am not refuting your statements, but there are Physicists who believe SD is viable, including Bell himself. I hope to find the time to read through the original works that Bell started with, perhaps when I retire and have the time. But I have to believe, since there are legitimate scientists who believe SD is a possible reality, that it is at least POSSIBLE that a theory can be developed. It just seems odd that most of the arguments I have read by Physicists against SD are emotional, opinionated arguments dealing with free will, and that “many worlds” is considered a better alternative than SD, while Bell recognized SD as a possible loophole to his theorem. Did he just not think it through before he made that statement? Is there something in the points that you make above that John Bell was not aware of? Specifically, something discovered after Bell that invalidates his claim?
 
  • #90
kclubb said:
all forces have corresponding particles - the Standard Model.
The Standard Model is a quantum field theory. Certain quantum field states are described as "particles", but there are many quantum field states that cannot be described that way. The fundamental entities are fields.
 
  • #91
kclubb said:
What you need to show is an example where a conservation law was VIOLATED when the observation is made.
No, you need to show that a conservation law must be violated if the universe is not fully 100% deterministic because you are the one who is making that claim. I am simply pointing out that you have not shown that. You have simply assumed it, and you can't just assume it. You have to show it.

The rest of your post is irrelevant to mine because I did not say any of the things you are talking about.
 
  • #92
kclubb said:
speculating that every time a “measurement” is made, a new Universe comes into existence
This is not what the MWI says. The "universe" in the MWI is the universal wave function, and there is always just one universal wave function. The wave function doesn't "split" when a measurement is made; that would violate unitary evolution, and the MWI says that the wave function always evolves in time by unitary evolution.
 
  • #93
kclubb said:
a deterministic Universe we give up free will for
You are aware that the MWI is 100% deterministic, correct?
 
  • #94
kclubb said:
I am not refuting your statements, but there are Physicists who believe SD is viable, including Bell himself.
In the last resort, “questions of faith” are irrelevant for doing physics. All one needs to carry out physics are observations that can be used to construct and test models - on the basis of the observed phenomena.
 
  • #95
PeterDonis said:
I don't think this claim can be asserted as fact at our current level of knowledge. ... All we can say is that we have no way of testing conservation laws for such cases at our current level of technology.
My claim is a mathematical fact that follows from the Bell state formalism alone. It has nothing to do with experimental uncertainty.
 
  • #96
RUTA said:
My claim is a mathematical fact that follows from the Bell state formalism alone.
As I said, this can't be correct because during the measurement process angular momentum is exchanged between the measured particles, which the formalism you refer to describes, and the measuring devices and environment, which the formalism does not describe. So the formalism is incomplete and cannot support any claims about conservation laws.

RUTA said:
It has nothing to do with experimental uncertainty.
My point has nothing to do with experimental uncertainty. It has to do with the fact that during measurement, the measured particles are open systems, not closed systems.
 
  • #97
PeterDonis said:
As I said, this can't be correct because during the measurement process angular momentum is exchanged between the measured particles, which the formalism you refer to describes, and the measuring devices and environment, which the formalism does not describe. So the formalism is incomplete and cannot support any claims about conservation laws. My point has nothing to do with experimental uncertainty. It has to do with the fact that during measurement, the measured particles are open systems, not closed systems.
Look at a Bell spin triplet state in the symmetry plane. When Alice and Bob both measure in the same direction, they both get the same outcome, +1 or -1. That is due to conservation of spin angular momentum. Now suppose Bob measures at an angle ##\theta## with respect to Alice and they do many trials of the experiment. When Alice partitions the data according to her +1 or -1 results, she expects Bob to measure ##+\cos{\theta}## or ##-\cos{\theta}##, respectively, because she knows he would have also measured +1 or -1 if he had measured in her direction. Therefore, she knows his true, underlying value of spin angular momentum is +1 or -1 along her measurement direction, so he should be measuring the projection of that true, underlying value along his measurement direction at ##\theta## to conserve spin angular momentum. Of course, Bob can partition the data according to his ##\pm 1## equivalence relation and say it is Alice who should be measuring ##\pm \cos{\theta}## in order to conserve spin angular momentum. It is impossible to conserve spin angular momentum exactly according to either Alice or Bob because they both always measure ##\pm 1## (in accord with the relativity principle), never a fraction. However, their results do average ##\pm \cos{\theta}## under these data partitions. It has nothing to do with momentum transfer with the measurement device. All of this follows strictly from the Bell spin state formalism.
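As a numerical check on the averages described above (my own sketch, using the standard triplet-state conditional probability P(Bob = Alice) = (1 + cos θ)/2 in the symmetry plane): Bob's individual outcomes are always exactly ±1, yet within Alice's +1 and -1 partitions his results average +cos θ and -cos θ.

```python
# Sketch: "average-only" conservation for a Bell spin triplet state.
import numpy as np

rng = np.random.default_rng(1)
theta = np.pi / 3                      # angle between measurement directions
n = 100_000

alice = rng.choice([+1, -1], n)        # Alice's outcomes: 50/50
# Triplet state, symmetry plane: P(bob = alice) = (1 + cos(theta)) / 2
same = rng.random(n) < (1 + np.cos(theta)) / 2
bob = np.where(same, alice, -alice)    # always exactly +1 or -1, never a fraction

print("Bob's possible outcomes:", np.unique(bob))                    # [-1  1]
print("Bob's average when Alice got +1:", bob[alice == +1].mean())   # ~ +0.5
print("Bob's average when Alice got -1:", bob[alice == -1].mean())   # ~ -0.5
print("+/- cos(theta) =", np.cos(theta))                             # 0.5
```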
 
  • #98
RUTA said:
It is impossible to conserve spin angular momentum exactly according to either Alice or Bob because they both always measure ##\pm 1## (in accord with the relativity principle), never a fraction. However, their results do average ##\pm \cos{\theta}## under these data partitions. It has nothing to do with momentum transfer with the measurement device.
Sorry, these statements are simply false as a matter of what actually happens in an experiment. Measurement involves interaction between the measured system and the measuring device. That interaction can exchange conserved quantities. So it is simply physically invalid to only look at the measured systems when evaluating conservation laws.
 
  • Like
Likes gentzen and vanhees71
  • #99
PeterDonis said:
Sorry, these statements are simply false as a matter of what actually happens in an experiment. Measurement involves interaction between the measured system and the measuring device. That interaction can exchange conserved quantities. So it is simply physically invalid to only look at the measured systems when evaluating conservation laws.
The Bell spin states obtain due to conservation of spin angular momentum without regard to any loss to the environment. Therefore, the theoretical results I shared are independent of experimental uncertainties, which is what you're trying to invoke.
 
  • #100
RUTA said:
The Bell spin states obtain due to conservation of spin angular momentum without regard to any loss to the environment.
How do you know? You're not measuring the exchange of angular momentum with the environment. That doesn't mean you can assume it doesn't happen. It means you don't know.

RUTA said:
the theoretical results I shared are independent of experimental uncertainties, which is what you're trying to invoke.
I don't know where you're getting this from. There can't be any experimental uncertainty in something that's not being measured. The fact that measurement involves interaction between the measured system and the measuring device is basic QM. But it does not imply that all aspects of that interaction are captured in the measurement result. In fact they practically never are.
 
  • Like
Likes gentzen and vanhees71