In summary: Superdeterminism is a controversial topic in the foundations community. In her video, Sabine Hossenfelder argues that superdeterminism should be taken seriously; indeed, it is what quantum mechanics (QM) is screaming for us to understand about Nature. According to her video, superdeterminism simply means the particles must have known at the outset of their trip whether to go through the right slit, the left slit, or both slits, based on what measurement was going to be done on them.
  • #36
martinbn said:
Can you clarify. How can there be no common overlap? Any two past lightcones overlap.
You'd think so! But these photons never existed in a common light cone because their lifespan is short. In fact, in this particular experiment, one photon was detected (and ceased to exist) BEFORE its entangled partner was created.

In this next experiment, the entangled photon pairs are spatially separated (and did coexist for a period of time). However, they were created sufficiently far apart that they never occupied a common light cone.

High-fidelity entanglement swapping with fully independent sources (2009)
https://arxiv.org/abs/0809.3991
"Entanglement swapping allows to establish entanglement between independent particles that never interacted nor share any common past. This feature makes it an integral constituent of quantum repeaters. Here, we demonstrate entanglement swapping with time-synchronized independent sources with a fidelity high enough to violate a Clauser-Horne-Shimony-Holt inequality by more than four standard deviations. The fact that both entangled pairs are created by fully independent, only electronically connected sources ensures that this technique is suitable for future long-distance quantum communication experiments as well as for novel tests on the foundations of quantum physics."

And from a 2009 paper that addresses the theoretical nature of entanglement swapping with particles with no common past, here is a quote that indicates that in fact this entanglement IS problematic for any theory claiming the usual locality (local causality):

"It is natural to expect that correlations between distant particles are the result of causal influences originating in their common past — this is the idea behind Bell’s concept of local causality [1]. Yet, quantum theory predicts that measurements on entangled particles will produce outcome correlations that cannot be reproduced by any theory where each separate outcome is locally determined by variables correlated at the source. This nonlocal nature of entangled states can be revealed by the violation of Bell inequalities.

"However remarkable it is that quantum interactions can establish such nonlocal correlations, it is even more remarkable that particles that never directly interacted can also become nonlocally correlated. This is possible through a process called entanglement swapping [2]. Starting from two independent pairs of entangled particles, one can measure jointly one particle from each pair, so that the two other particles become entangled, even though they have no common past history. The resulting pair is a genuine entangled pair in every aspect, and can in particular violate Bell inequalities.

"Intuitively, it seems that such entanglement swapping experiments exhibit nonlocal effects even stronger than those of usual Bell tests. To make this intuition concrete and to fully grasp the extent of nonlocality in entanglement swapping experiments, it seems appropriate to contrast them with the predictions of local models where systems that are initially uncorrelated are described by uncorrelated local variables. This is the idea that we pursue here."


Despite the comments from Nullstein to the contrary, such swapped pairs are entangled without any qualification - as indicated in the quote above.
 
  • #37
DrChinese said:
You'd think so! But these photons never existed in a common light cone because their lifespan is short. In fact, in this particular experiment, one photon was detected (and ceased to exist) BEFORE its entangled partner was created.
It is probably the phrasing that you use that confuses me, but I still don't understand. For example, the past light cone of the event "production of the second pair of photons" contains the whole life of the first photon, which no longer exists. All the possible light cones that you can pick intersect.
 
  • #38
Nullstein said:
I find both non-locality and superdeterminism equally unappealing. Both mechanisms require fine-tuning.
How does non-locality require fine tuning?

Nullstein said:
However, if I had to choose, I would probably pick superdeterminism, just because it is less anthropocentric.
How is non-locality anthropocentric?

Nullstein said:
There are non-local cause and effect relations everywhere but nature conspires in such a way to prohibit communication? I can't make myself believe that, but that's just my personal opinion.
There is no conspiracy. See https://www.physicsforums.com/threa...unterfactual-definiteness.847628/post-5319182
 
  • Like
Likes gentzen
  • #39
  • #40
RUTA said:
Well, my understanding is that the experiments will check for violations of randomness where QM predicts randomness.
Then the proponents of “superdeterminism” should tackle radioactive decay.
 
  • Like
Likes jbergman and vanhees71
  • #41
martinbn said:
If I understand you correctly you are saying that it is (or might be) possible but it has not been found yet.
In principle, yes.

martinbn said:
Doesn't this contradict the no communication theorem?
No, because in theories such as Bohmian mechanics, the no communication theorem is a FAPP (for all practical purposes) theorem.

Roughly, this is like the law of large numbers. Is it valid for ##N=1000##? In principle, no. In practice, yes.
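As a minimal sketch of what I mean (a toy Python simulation with numbers of my own choosing, not anything from the Bohmian literature): for ##N=1000## fair coin flips, the relative frequency of heads is almost always within a few percent of 1/2, even though any deviation remains possible in principle.

```python
import random

# Toy illustration of a FAPP law: how often does the relative frequency of
# heads in N = 1000 fair coin flips stray more than 5% from 1/2?
# In principle it can; in practice it almost never does.
N = 1000          # flips per run
RUNS = 10_000     # number of repeated experiments
EPS = 0.05        # deviation threshold

big_deviations = 0
for _ in range(RUNS):
    heads = sum(random.random() < 0.5 for _ in range(N))
    if abs(heads / N - 0.5) > EPS:
        big_deviations += 1

# Typically prints a fraction on the order of 0.001-0.002.
print(f"Fraction of runs deviating by more than {EPS}: {big_deviations / RUNS}")
```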
 
  • #42
"Superdeterminism as a way to resolve the mystery of quantum entanglement is generally not taken seriously in the foundations community, as explained in this video by Sabine Hossenfelder (posted in Dec 2021). In her video, she argues that superdeterminism should be taken seriously, indeed it is what quantum mechanics (QM) is screaming for us to understand about Nature. According to her video per the twin-slit experiment, superdeterminism simply means the particles must have known at the outset of their trip whether to go through the right slit, the left slit, or both slits, based on what measurement was going to be done on them."

Why is "superdeterminism" not taken seriously. As Shimony, Clauser and Horne put it:

"In any scientific experiment in which two or more variables are supposed to be randomly selected, one can always conjecture that some factor in the overlap of the backwards light cones has controlled the presumably random choices. But, we maintain, skepticism of this sort will essentially dismiss all results of scientific experimentation. Unless we proceed under the assumption that hidden conspiracies of this sort do not occur, we have abandoned in advance the whole enterprise of discovering the laws of nature by experimentation."
 
  • Like
Likes PeroK
  • #43
DrChinese said:
You'd think so! But these photons never existed in a common light cone because their lifespan is short. In fact, in this particular experiment, one photon was detected (and ceased to exist) BEFORE its entangled partner was created.

In this next experiment, the entangled photon pairs are spatially separated (and did coexist for a period of time). However, they were created sufficiently far apart that they never occupied a common light cone.

High-fidelity entanglement swapping with fully independent sources (2009)
https://arxiv.org/abs/0809.3991
"Entanglement swapping allows to establish entanglement between independent particles that never interacted nor share any common past. This feature makes it an integral constituent of quantum repeaters. Here, we demonstrate entanglement swapping with time-synchronized independent sources with a fidelity high enough to violate a Clauser-Horne-Shimony-Holt inequality by more than four standard deviations. The fact that both entangled pairs are created by fully independent, only electronically connected sources ensures that this technique is suitable for future long-distance quantum communication experiments as well as for novel tests on the foundations of quantum physics."

And from a 2009 paper that addresses the theoretical nature of entanglement swapping with particles with no common past, here is a quote that indicates that in fact this entanglement IS problematic for any theory claiming the usual locality (local causality):

"It is natural to expect that correlations between distant particles are the result of causal influences originating in their common past — this is the idea behind Bell’s concept of local causality [1]. Yet, quantum theory predicts that measurements on entangled particles will produce outcome correlations that cannot be reproduced by any theory where each separate outcome is locally determined by variables correlated at the source. This nonlocal nature of entangled states can be revealed by the violation of Bell inequalities.

"However remarkable it is that quantum interactions can establish such nonlocal correlations, it is even more remarkable that particles that never directly interacted can also become nonlocally correlated. This is possible through a process called entanglement swapping [2]. Starting from two independent pairs of entangled particles, one can measure jointly one particle from each pair, so that the two other particles become entangled, even though they have no common past history. The resulting pair is a genuine entangled pair in every aspect, and can in particular violate Bell inequalities.
None of these articles are in contradiction to what I said. Again, nobody doubts that entanglement swapping produces Bell pairs that violate Bell's inequality. The question is: Does entanglement swapping add to the mystery? And the answer is: It does not. The article by Deutsch applies to all these experiments and therefore shows that nothing mysterious beyond the usual mystery is going on.
DrChinese said:
"Intuitively, it seems that such entanglement swapping experiments exhibit nonlocal effects even stronger than those of usual Bell tests.
You should rather have highlighted the word "intuitively," because one may come to this conclusion intuitively. But a more complete analysis just shows that nothing about entanglement swapping is any stronger or any more mysterious than ordinary Bell pairs.
DrChinese said:
Despite the comments from Nullstein to the contrary, such swapped pairs are entangled without any qualification - as indicated in the quote above.
You misrepresent what I wrote. Of course the entanglement-swapped pairs are entangled; I said that multiple times. It's just that no additional mystery is added by the process of swapping. All the mystery is concentrated in the initial Bell pairs themselves. The swapping process has a local explanation.
 
  • #44
I have never understood why there is such a visceral oppositional response to superdeterminism while the Many Worlds Interpretation and Copenhagen Interpretation enjoy tons of support. In Many Worlds the answer is essentially: "Everything happens, thus every observation is explained, you just happen to be the observer observing this outcome". In Copenhagen it's equally lazy: "Nature is just random, there is no explanation". How are these any less anti-scientific than Superdeterminism?
 
  • Skeptical
  • Like
Likes physika and PeroK
  • #45
Quantumental said:
I have never understood why there is such a visceral oppositional response to superdeterminism while the Many Worlds Interpretation and Copenhagen Interpretation enjoy tons of support. In Many Worlds the answer is essentially: "Everything happens, thus every observation is explained, you just happen to be the observer observing this outcome". In Copenhagen it's equally lazy: "Nature is just random, there is no explanation". How are these any less anti-scientific than Superdeterminism?
I think that all three somewhat break with simple ideas about how science is done and what it tells us but to a different degree.

Copenhagen says that there are limits to what we can know because we need to split the world into the observer and the observed. The MWI does away with unique outcomes. Superdeterminism does away with the idea that we can limit the influence of external degrees of freedom on the outcome of our experiment. These might turn out to be different sides of the same coin but taken at face value, the third seems like the most radical departure from the scientific method to me.

There's also the point that unlike Copenhagen and the MWI, superdeterminism is not an interpretation of an existing theory. It is a property of a more fundamental theory which doesn't make any predictions to test it (NB: Sabine Hossenfelder disagrees) because it doesn't even exist yet.
 
  • Like
Likes Lord Jestocost and DrChinese
  • #46
Lord Jestocost said:
Why is "superdeterminism" not taken seriously. As Shimony, Clauser and Horne put it:

"In any scientific experiment in which two or more variables are supposed to be randomly selected, one can always conjecture that some factor in the overlap of the backwards light cones has controlled the presumably random choices. But, we maintain, skepticism of this sort will essentially dismiss all results of scientific experimentation.
I don't want to defend superdeterminism, but I have trouble seeing the difference to placebo effects in controlled medical studies, where it is randomly determined which patient should receive which treatment. Of course, we don't tell the individual patients which treatment they got. But we would naively not expect that it has an influence whether the experimenter (i.e. the doctor) knows which patient received which treatment. But apparently it has some influence.

I have the impression that the main difference to the Bell test is that for placebo effects in medicine we can make experiments to check whether such an influence is present. But the skepticism itself that such an influence might be present in the first place doesn't seem to me very different. (And if the conclusion is that non-actionable skepticism will lead us nowhere, then I can accept this. But that is not how Shimony, Clauser and Horne have put it.)
 
  • #47
gentzen said:
But we would naively not expect that it has an influence whether the experimenter (i.e. the doctor) knows which patient received which treatment. But apparently it has some influence.
Whether this is true or not (that question really belongs in the medical forum for discussion, not here), this is not the placebo effect. The placebo effect is where patients who get the placebo (but don't know it--and in double-blind trials the doctors don't know either) still experience a therapeutic effect.

I don't see how this has an analogue in physics.
 
  • #48
Shimony, Clauser and Horne said:
In any scientific experiment in which two or more variables are supposed to be randomly selected
gentzen said:
but I have trouble seeing the difference to placebo effects in controlled medical studies
PeterDonis said:
I don't see how this has an analogue in physics.
I am not suggesting an analogue in physics; I only have trouble "seeing" where that scenario occurring in controlled medical studies was excluded in what Shimony, Clauser and Horne wrote.

I also tried to guess what they actually meant, namely that if there is nothing you can do to check whether your skepticism was justified, then it just stifles scientific progress.
 
  • #49
gentzen said:
I only have trouble "seeing" where that scenario occurring in controlled medical studies was excluded in what Shimony, Clauser and Horne wrote.
I don't see how the two have anything to do with each other.

gentzen said:
I also tried to guess what they actually meant
Consider the following experimental setup:

I have a source that produces two entangled photons at point A. The two photons go off in opposite directions to points B and C, where their polarizations are measured. Points B and C are each one light-minute away from point A.

At each polarization measurement, B and C, the angle of the polarization measurement is chosen 1 second before the photon arrives, based on random bits of information acquired from incoming light from a quasar roughly a billion light-years away that lies in the opposite direction from the photon source at A.

A rough diagram of the setup is below:

Quasar B -- (1B LY) -- B -- (1 LM) -- A -- (1 LM) -- C -- (1B LY) -- Quasar C

In this setup, any violation of statistical independence between the angles of the polarizations and the results of the individual measurements (not the correlations between the measurements, those will be as predicted by QM, but the statistics of each measurement taken separately) would have to be due to some kind of pre-existing correlation between the photon source at A and the distant quasars at both B and C. This is the sort of thing that superdeterminism has to claim must exist.
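To make the baseline concrete, here is a minimal sketch (Python, using the standard QM probabilities for a polarization-entangled pair; the notation is mine): QM itself predicts that the single-detector statistics at B and C are flat at 1/2 for every choice of angles, so any angle-dependent structure in those individual statistics would signal exactly the kind of pre-existing correlation described above.

```python
import numpy as np

# Sketch of the QM baseline for the setup above (|Phi+> photon pair, angles b, c):
#   P(pass, pass) = P(block, block) = cos^2(b - c) / 2
#   P(pass, block) = P(block, pass) = sin^2(b - c) / 2
# The marginal at each station is 1/2 regardless of the angle choices, so a
# superdeterministic failure of statistical independence would show up as a
# departure from this flat single-detector statistic, tied to the settings.

def joint_probs(b, c):
    """2x2 table P(outcome_B, outcome_C | angles b, c), outcomes = (pass, block)."""
    same, diff = np.cos(b - c) ** 2 / 2, np.sin(b - c) ** 2 / 2
    return np.array([[same, diff], [diff, same]])

rng = np.random.default_rng(1)
for b, c in rng.uniform(0, np.pi, size=(1000, 2)):   # "quasar-seeded" angle pairs
    p = joint_probs(b, c)
    assert np.isclose(p[0].sum(), 0.5)      # P(pass at B) = 1/2 for all settings
    assert np.isclose(p[:, 0].sum(), 0.5)   # P(pass at C) = 1/2 for all settings

print("Single-detector statistics are setting-independent, as standard QM predicts.")
```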
 
  • Like
Likes DrChinese and Lord Jestocost
  • #50
kith said:
I think that all three somewhat break with simple ideas about how science is done and what it tells us but to a different degree.

Copenhagen says that there are limits to what we can know because we need to split the world into the observer and the observed. The MWI does away with unique outcomes. Superdeterminism does away with the idea that we can limit the influence of external degrees of freedom on the outcome of our experiment. These might turn out to be different sides of the same coin but taken at face value, the third seems like the most radical departure from the scientific method to me.

There's also the point that unlike Copenhagen and the MWI, superdeterminism is not an interpretation of an existing theory. It is a property of a more fundamental theory which doesn't make any predictions to test it (NB: Sabine Hossenfelder disagrees) because it doesn't even exist yet.

I disagree. Copenhagen's "there are limits to what we can know about reality, quantum theory is the limit we can probe" is no different from "reality is made up of more than quantum theory" which SD implies. It's semantics. As for MWI, yes, but by doing away with unique outcomes (at least in the most popular readings of Everettian QM) you literally state: "The theory only makes sense if you happen to be part of the wavefunction where the theory makes sense, but you are also part of maverick branches where QM is invalidated, thus nothing can really be validated as it were pre-QM". I would argue that this stance is equally radical in its departure from what we consider the scientific method to be.
 
  • #51
PeterDonis said:
Consider the following experimental setup:

I have a source that produces two entangled photons at point A. The two photons go off in opposite directions to points B and C, where their polarizations are measured. Points B and C are each one light-minute away from point A.

At each polarization measurement, B and C, the angle of the polarization measurement is chosen 1 second before the photon arrives, based on random bits of information acquired from incoming light from a quasar roughly a billion light-years away that lies in the opposite direction from the photon source at A.

A rough diagram of the setup is below:

Quasar B -- (1B LY) -- B -- (1 LM) -- A -- (1 LM) -- C -- (1B LY) -- Quasar C

In this setup, any violation of statistical independence between the angles of the polarizations and the results of the individual measurements (not the correlations between the measurements, those will be as predicted by QM, but the statistics of each measurement taken separately) would have to be due to some kind of pre-existing correlation between the photon source at A and the distant quasars at both B and C. This is the sort of thing that superdeterminism has to claim must exist.
Exactly. On p. 3 of Superdeterminism: A Guide for the Perplexed, Sabine writes:
What does it mean to violate Statistical Independence? It means that fundamentally everything in the universe is connected with everything else, if subtly so.
 
  • #52
PeterDonis said:
In this setup, any violation of statistical independence between the angles of the polarizations and the results of the individual measurements (...) would have to be due to some kind of pre-existing correlation between the photon source at A and the distant quasars at both B and C.
Can you prove this? Is this not similar to claiming that, in the scenario occurring in controlled medical studies, any correlations would be due to correlations in the sources of randomness and the selected patient? You are basically postulating a mechanism for how the violation of statistical independence could have happened, and then pointing out that this mechanism would be implausible. Of course it is implausible, but the task would have been to prove that such an implausible mechanism is the only way a violation of statistical independence can arise.
 
  • Like
Likes physika
  • #53
Lord Jestocost said:
Then the proponents of “superdeterminism” should tackle radioactive decay.
I don't study SD, so I don't know what they're looking for exactly. Maybe they can't control sample preparation well enough for radioactive decay? Just guessing, because that seems like an obvious place to look.
 
  • #54
gentzen said:
Can you prove this?
Isn't it obvious? We are talking about a lack of statistical independence between a photon source at A and light sources in two quasars, each a billion light-years from A in opposite directions. How else could that possibly be except by some kind of pre-existing correlation?

gentzen said:
Is this not similar to claiming that in the scenario occurring in controlled medical studies, any correlations would be due to correlations in the sources of randomness and the selected patient?
Sort of. Correlations between the "sources of randomness" and the patient would be similar, yes. (And if we wanted to be extra, extra sure to eliminate such correlations, we could use light from quasars a billion light-years away as the source of randomness for medical experiments, just as was done in the scenario I described.) But that has nothing to do with the placebo effect, and is not what I understood you to be talking about before.
 
  • #55
Quantumental said:
I disagree. Copenhagen's "there are limits to what we can know about reality, quantum theory is the limit we can probe" is no different from "reality is made up of more than quantum theory" which SD implies. It's semantics.
Maybe. But with this notion of semantics the question is: what topic regarding interpretations and foundations of established physics isn't semantics?

Quantumental said:
As for MWI, yes, but by doing away with unique outcomes (at least in the most popular readings of Everettian QM) you literally state: "The theory only makes sense if you happen to be part of the wavefunction where the theory makes sense, but you are also part of maverick branches where QM is invalidated, thus nothing can really be validated as it were pre-QM". I would argue that this stance is equally radical in its departure from what we consider the scientific method to be.
I find it difficult to discuss this because there are problems with the role of probabilities in the MWI which may or may not be solved. For now, I would say that I don't see the difference between my experience of the past right now belonging to a maverick branch vs. a single-world view where, by pure chance, I and everybody else got an enormous amount of untypical data which led us to infer wrong laws of physics.
 
  • #56
Nullstein said:
"Quantum nonlocality" is just another word for Bell-violating, which is of course accepted, because it has been demonstrated experimentally and nobody has denied that. The open question is the mechanism and superdeterminism is one way to address it.
Right, as Sabine points out in her video:
Hidden Variables + Locality + Statistical Independence = Obeys Bell inequality
Once you've committed to explaining entanglement with a mechanism (HV), you're stuck with violating at least one of Locality or Statistical Independence.
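As a rough illustration of that statement (a toy numerical model of my own, not anything from Sabine's video): keep hidden variables, locality and statistical independence, and the CHSH combination never exceeds 2, while the quantum prediction for photon pairs reaches ##2\sqrt{2}## at the standard angles.

```python
import numpy as np

# Toy local hidden-variable model: each photon pair carries a shared hidden angle
# lam, outcomes depend only on the local setting and lam (locality), and lam is
# drawn without reference to the settings (statistical independence).
rng = np.random.default_rng(42)
lam = rng.uniform(0, np.pi, 200_000)

def E_local(a, b):
    A = np.sign(np.cos(2 * (a - lam)))   # +/-1 outcome at Alice, local
    B = np.sign(np.cos(2 * (b - lam)))   # +/-1 outcome at Bob, local
    return np.mean(A * B)

def E_qm(a, b):
    return np.cos(2 * (a - b))           # QM correlation for a |Phi+> photon pair

a1, a2 = 0.0, np.pi / 4                  # Alice's settings
b1, b2 = np.pi / 8, 3 * np.pi / 8        # Bob's settings

def chsh(E):
    return E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)

print(f"local HV model: S = {chsh(E_local):.3f}  (<= 2, up to Monte Carlo noise)")
print(f"quantum theory: S = {chsh(E_qm):.3f}  (= 2*sqrt(2) ~ 2.83)")
```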
 
  • Like
Likes Lord Jestocost and DrChinese
  • #57
gentzen said:
I also tried to guess what they actually meant, namely that if there is nothing you can do to check whether your skepticism was justified, then it just stifles scientific progress.
Indeed, I would say it violates the principles of sound inference, which are presumably the foundation of the scientific process. Without evidence, the natural and sound assumption is to not assume dependence. To assume, and invest in representing, the existence of an unknown causal mechanism seems deeply irrational to me. It is similar to the fallacy one commits when postulating an ad hoc microstructure with an ergodic hypothesis. Extrinsic ad hoc elements corrupt the natural inference process, and thus explanatory value. Superdeterminism is IMO such an extrinsic ad hoc element. I wouldn't say it's "wrong", I just find it an irrational thought from the perspective of learning, and scientific progress is about learning - but in a "rational way".

/Fredrik
 
  • Like
Likes gentzen
  • #58
Demystifier said:
How does non-locality require fine tuning?
In order to reproduce QM using non-locality, you don't just need non-locality. You need to tune the equations in such a way that they obey the no communication property, i.e. it must be impossible to use the non-local effects to transmit information. That's very much akin to the no free will requirement in superdeterminism. As can be seen in this paper, such a non-local explanation of entanglement requires even more fine tuning than superdeterminism.
Demystifier said:
How is non-locality anthropocentric?
Because communication is an anthropocentric notion. If the theory needs to be fine tuned to make communication impossible, this is thus a very anthropocentric requirement.
 
  • Skeptical
  • Like
Likes PeroK and gentzen
  • #59
RUTA said:
Once you've committed to explaining entanglement with a mechanism (HV), you're stuck with violating at least one of Locality or Statistical Independence.
Only if you insist that by "mechanism" we mean an explanation in terms of hidden variables. I rather expect that we need to refine our notion of what constitutes a mechanism. Before Einstein, people already knew the Lorentz transformations, but they were stuck believing in an ether explanation. Even today, there are a few holdouts who can't wrap their heads around the modern interpretation. I think with quantum theory we are in a quite similar situation. We know the correct formulas, but we are stuck in classical thinking. I guess we need another Einstein to sort this out.
 
  • Like
Likes Fra and martinbn
  • #60
Nullstein said:
You need to tune the equations in such a way that they obey the no communication property, i.e. it must be impossible to use the non-local effects to transmit information.
For me, it is like a 19th-century critique of the atomic explanation of thermodynamics based on the idea that it requires fine tuning of the rules of atomic motion to fulfill the 2nd law (that entropy of the full system cannot decrease) and the 3rd law (that you cannot reach the state of zero entropy). And yet, as Boltzmann understood back then and many physics students understand today, those laws are FAPP laws that do not require fine tuning at all.

Indeed, in Bohmian mechanics it is well understood why nonlocality does not allow (in the FAPP sense) communication faster than light. It is a consequence of quantum equilibrium, for more details see e.g. https://arxiv.org/abs/quant-ph/0308039
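For the no-communication part, here is a minimal check at the level of ordinary QM (a sketch of the textbook statement, not of the Bohmian quantum-equilibrium derivation itself): whatever basis Alice measures the singlet state in, Bob's reduced density matrix stays ##I/2##, so his local statistics carry no signal.

```python
import numpy as np

# Minimal check of the no-communication property in ordinary QM: whatever
# direction Alice measures her half of a singlet along, Bob's reduced density
# matrix remains I/2, so nothing Alice chooses shows up in his local statistics.

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
singlet = (np.kron(ket0, ket1) - np.kron(ket1, ket0)) / np.sqrt(2)
rho = np.outer(singlet, singlet)

def bob_state_after_alice_measures(theta):
    """Bob's reduced state after Alice measures along angle theta (her outcome
    is not communicated), obtained by averaging over her two outcomes."""
    up = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    down = np.array([-np.sin(theta / 2), np.cos(theta / 2)])
    rho_B = np.zeros((2, 2))
    for v in (up, down):
        P = np.kron(np.outer(v, v), np.eye(2))   # projector acting on Alice only
        post = P @ rho @ P
        rho_B += np.trace(post.reshape(2, 2, 2, 2), axis1=0, axis2=2)  # trace out Alice
    return rho_B

for theta in (0.0, 0.7, 2.1):
    assert np.allclose(bob_state_after_alice_measures(theta), np.eye(2) / 2)
print("Bob's reduced state is I/2 for every measurement choice Alice makes.")
```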
 
  • Like
Likes PeroK
  • #61
Nullstein said:
Only if you insist that by "mechanism" we mean an explanation in terms of hidden variables. I rather expect that we need to refine our notion of what constitutes a mechanism. Before Einstein, people already knew the Lorentz transformations, but they were stuck believing in an ether explanation. Even today, there are a few holdouts who can't wrap their heads around the modern interpretation. I think with quantum theory we are in a quite similar situation. We know the correct formulas, but we are stuck in classical thinking. I guess we need another Einstein to sort this out.
Yes, and even Einstein tried "mechanisms" aka "constructive efforts" to explain time dilation and length contraction before he gave up and went with his principle explanation (relativity principle and light postulate). Today, most physicists are content with that principle explanation without a constructive counterpart (no "mechanisms", see this article), i.e., what you call "the modern interpretation." So, if you're happy with that principle account of SR, you should be happy with our principle account of QM (relativity principle and "Planck postulate"). All of that is explained in "No Preferred Reference Frame at the Foundation of Quantum Mechanics". Here is my APS March Meeting talk on that paper (only 4 min 13 sec long) if you don't want to read the paper.
 
  • #62
Nullstein said:
I rather expect that we need to refine our notion of what constitutes a mechanism.
I agree that this is a major insight. The notion of "mechanism" in the current paradigm heavily relies on state spaces and timeless laws, i.e. equation-based models. In models of interacting parts that are learning, I think the "causation" is different; it is essentially the rules of learning and organisation of the parts. This thinking is totally absent when one talks about the Bell inequality. It requires a different paradigm, I think.

/Fredrik
 
  • Like
Likes gentzen
  • #63
Demystifier said:
For me, it is like a 19th-century critique of the atomic explanation of thermodynamics based on the idea that it requires fine tuning of the rules of atomic motion to fulfill the 2nd law (that entropy of the full system cannot decrease) and the 3rd law (that you cannot reach the state of zero entropy). And yet, as Boltzmann understood back then and many physics students understand today, those laws are FAPP laws that do not require fine tuning at all.
No, that's very different. Boltzmann's theory had measurable implications. He did not claim that nature conspires to hide atoms from humans even in principle. If we look close enough, we can observe their effects and make use of them, even though that may not have been technologically possible back in Boltzmann's days. In stark contrast, any theory that reproduces QM must obey the no-communication theorem and prohibit the observation and use of those non-local effects forever.
Demystifier said:
Indeed, in Bohmian mechanics it is well understood why nonlocality does not allow (in the FAPP sense) communication faster than light. It is a consequence of quantum equilibrium, for more details see e.g. https://arxiv.org/abs/quant-ph/0308039
It doesn't matter though, whether it is well understood. That doesn't make it any less fine tuned.
 
  • #64
RUTA said:
Yes, and even Einstein tried "mechanisms" aka "constructive efforts" to explain time dilation and length contraction before he gave up and went with his principle explanation (relativity principle and light postulate). Today, most physicists are content with that principle explanation without a constructive counterpart (no "mechanisms", see this article), i.e., what you call "the modern interpretation."
I wouldn't consider modern special relativity less constructive than classical mechanics. SR replaces the Galilei symmetry by Poincare symmetry. There is no reason why Galilei symmetry should be preferred, nature just made a different choice. We don't get around postulating some basic principles in any case.
RUTA said:
So, if you're happy with that principle account of SR, you should be happy with our principle account of QM (relativity principle and "Planck postulate"). All of that is explained in "No Preferred Reference Frame at the Foundation of Quantum Mechanics". Here is my APS March Meeting talk on that paper (only 4 min 13 sec long) if you don't want to read the paper.
For me, a satisfactory explanation of QM would have to explain, why we have to use this weird Hilbert space formalism and non-commutative algebras in the first place. Sure, we have learned to perform calculations with it, but at the moment, it's not something a sane person would come up with if decades of weird experimental results hadn't forced them to. I hope someone figures this out in my lifetime.
 
  • #65
Nullstein said:
I wouldn't consider modern special relativity less constructive than classical mechanics. SR replaces the Galilei symmetry by Poincare symmetry. There is no reason why Galilei symmetry should be preferred, nature just made a different choice. We don't get around postulating some basic principles in any case.

For me, a satisfactory explanation of QM would have to explain, why we have to use this weird Hilbert space formalism and non-commutative algebras in the first place. Sure, we have learned to perform calculations with it, but at the moment, it's not something a sane person would come up with if decades of weird experimental results hadn't forced them to. I hope someone figures this out in my lifetime.
SR is a principle theory, per Einstein. Did you read the Mainwood article with Einstein quotes?

And, yes, the information-theoretic reconstructions of QM, whence the relativity principle applied to the invariant measurement of h, give you the Hilbert space formalism with its non-commutative algebra. So, it's figured out in principle fashion. Did you read our paper?
 
  • #66
RUTA said:
SR is a principle theory, per Einstein. Did you read the Mainwood article with Einstein quotes?
Yes, I read the article, but I don't agree with it. I don't agree that there is such a distinction between principle theories and constructive theories. Every theory must be based on some fundamental axioms. SR just has different ones than classical mechanics.
RUTA said:
And, yes, the information-theoretic reconstructions of QM, whence the relativity principle applied to the invariant measurement of h, give you the Hilbert space formalism with its non-commutative algebra. So, it's figured out in principle fashion. Did you read our paper?
I watched the video, but then I saw ket vectors suddenly appearing from one slide to the next. Also, it was only concerned with spin states and rotations, not with recovering the full QM formalism. I'm aware that there are people who try to derive QM from information theoretic principles, but personally, I'm not satisfied by the axioms that they came up with so far. To me, they are not more intuitive than the ones von Neumann came up with. Personally, I'm hoping for something far more intuitive.
 
  • #67
Nullstein said:
Demystifier said:
How is non-locality anthropocentric?
Because communication is an anthropocentric notion. If the theory needs to be fine tuned to make communication impossible, this is thus a very anthropocentric requirement.
That is an impressively nice thought. Much nicer and deeper than your later elaboration:
Nullstein said:
And that makes it anthropocentric, because the Born rule captures everything that is in principle accessible to humans while it hides everything that is inaccessible to humans in principle (such as details about hidden variables).

A "strange" discussion on "true randomness" on the FOM list just ended, with conclusions like:
Alex Galicki said:
It seems that in the current discussion on "true randomness", there is so much confusion that it would take way too much time for anyone to clear that up in a reasonable amount of time.
...
C) One simple but important idea from Algorithmic Randomness is that randomness is always relative, there is no "true randomness".
My own attempted definition of "a truly random phenomenon" (in the context of gambling and games, anthropocentric and relative) also occurred in that discussion, contrasted to quantum randomness:
  • Quantum mechanics: The randomness itself is nonlocal, and it must be really random, because otherwise this non-locality could be used for instantaneous signal transmission.
  • Gambling and games: A truly random phenomenon must produce outcomes that are completely unpredictable. And not just unpredictable for you and me, but unpredictable for both my opponents and proponents.
The nice thing about your thought is that quantum randomness should better not be anthropocentric and relative. Or maybe it should, because "true randomness" is. This could point towards ideas like Gell-Mann's and Hartle's "information gathering and utilizing systems (IGUSs)" or Rovelli's relational quantum mechanics.
 
  • Like
Likes Fra
  • #68
gentzen said:
One simple but important idea from Algorithmic Randomness is that randomness is always relative, there is no "true randomness"
This is a key insight IMO. Even though it is a big mess, the conceptual value of the insight becomes obvious, I think, only once you start considering the structure, action, and rules of interacting "IGUSs" / agents / observers. This is where I want to dig further.

This has many relations to unification issues. For example, is Hawking radiation random? Is a big black hole "more random" than a microscopic one? If so, why?

/Fredrik
 
  • #69
Nullstein said:
It doesn't matter though, whether it is well understood. That doesn't make it any less fine tuned.
So where exactly is fine tuning in the Bohmian theory?
 
  • Like
Likes PeroK
  • #70
Nullstein said:
As can be seen in this paper, such a non-local explanation of entanglement requires even more fine tuning than superdeterminism.
I can't find this claim in the paper. Where exactly does the paper say that?
 
  • Like
Likes DrChinese
