Understanding Barandes' microscopic theory of causality


Discussion Overview

This thread explores Barandes' microscopic theory of causality as presented in his pre-print "New Prospects for a Causally Local Formulation of Quantum Theory." The discussion focuses on the implications of Barandes' claims regarding causal locality in quantum mechanics, particularly in relation to Bell's theorem, and seeks to understand the interpretation of entanglement within this framework.

Discussion Character

  • Debate/contested
  • Exploratory
  • Technical explanation
  • Conceptual clarification

Main Points Raised

  • Some participants express skepticism about Barandes' assertion that his theory deflates Bell's theorem, questioning how he can claim a causally local hidden-variables formulation of quantum theory.
  • Barandes distinguishes between causal locality and Bell's local causality, which raises concerns about whether he is merely restating the no-signaling theorem.
  • There is a suggestion that Barandes' interpretation could lead to a fundamentally different understanding of the universe compared to general relativity.
  • One participant notes that Barandes does not translate "entanglement" into his new framework, implying that it remains an unresolved aspect of his theory.
  • Another participant proposes that Barandes' hidden variables differ from those in Bell's theorem, suggesting a violation of the assumption of "divisibility" into an objective beable.
  • Concerns are raised about the difficulty of explaining causal locality through a Bayesian network analogy as attempted by Barandes.
  • Some participants emphasize the need for an open-minded approach to understanding Barandes' principles rather than dismissing them outright.

Areas of Agreement / Disagreement

Participants generally do not reach consensus, with multiple competing views regarding the implications of Barandes' theory and its relationship to established concepts in quantum mechanics and relativity. The discussion remains unresolved on several key points, particularly concerning the interpretation of entanglement and the validity of Barandes' claims about causal locality.

Contextual Notes

Participants note limitations in understanding Barandes' framework, particularly regarding the translation of established quantum concepts into his proposed language. There are unresolved questions about the implications of his theory for existing interpretations of quantum mechanics and the foundational assumptions underlying Bell's theorem.

  • #391
Fra said:
For me the only conceptual resolution to this is the insight that the laws of nature are emergent. I see it like this: it's exactly because nature does not "know" that the natural resolution is stochastic progression, but one that is constrained by interaction history. Each subsystem's "random walk" as it interacts with other subsystems has a definite history, but this does not map onto a single objective beable state, as in system dynamics; this is where I see the paradigm break down. Instead, what replaces the state space is more like an evolving "network" of contexts. A "trajectory" in this is a different animal than a "path" in a state space. This makes sense to me. But the problem is that I see "law" (i.e. differential equations) being replaced by a kind of stochastic process/algorithm. So the arrow of time as defined by dynamical law is conceptually replaced by a "learning" network. And they can be consistent if and only if the learning network reaches some steady state, where "effective" dynamical laws can be seen at moderate time scales.

This is conceptually fine for me, but the missing part is the algorithmic implementation, and a proof that steady states that map onto the standard models' "effective theories" (i.e. differential equations in effective state spaces) exist. This is more complicated, but conceptually more satisfactory than having a dynamical law that "just is" for no particular reason. This is easier to accept if one realizes that even classical physics has its mysteries. We are just "used to" those mysteries and accept them.
You might find this Curt Jaimungal episode interesting:



Fra said:
But this view is a plain descriptive probability of all time history, which in itself has zero predictive value in the situation where you want to guess the near future from a limited sample. Descriptive statistics give no insight into causal mechanisms at all.

Well I think all probabilities bottom-out this way. Probabilities just won't be veridical or even meaningful, at least for physical systems, if they don't match approximately what would happen if you repeated scenarios infinitely many times. The Omniscient observer has just made a direct calculation of these probabilities in an objectivist / frequentist manner. From the perspective of another kind of observer standing at some point in time of the history of the specific scenario being codified, and subjectively not knowing what will happen next, these probabilities will still have predictive value.

At the same time, this descriptive kind of laws is the same kind of perspective Morbert was pointing at when he said Laws codify what we can say about it - Humean regularities in the behavior of the universe. Albeit, to some extent this codification kind of depends on what an observer happens to be able to see or cognitively process, and the omniscient observer can count stuff that ordinary people are unable to see. But the point is that from this perspective; if definite trajectories exist and they can be counted, then you can codify probabilistic laws about them in principle from which the indivisible probabilities emerge as a special case.
 
  • #392
iste said:
Well I think all probabilities bottom-out this way. Probabilities just won't be veridical or even meaningful, at least for physical systems, if they don't match approximately what would happen if you repeated scenarios infinitely many times
All descriptive probabilities, yes. Descriptive probabilities become normative only when combined with the stationarity assumption, that a limited sample is representative. This assumption of course holds well for all normal subatomic physics. The timescale for repeating the preparation/detection and processing the data is short relative to the possible scale at which the system's behaviour changes. This leads us to the conventional paradigm, where we can always infer a statistical law and state space that are timeless, and we reduce time to a parameterisation.

But it is exactly this way of constructing law that completely obscures the nature of causality. This view of "law" is also deeply problematic when the stationarity assumption fails; for example, when adding cosmological or evolutionary perspectives. So the current paradigm excels at time scales that are short relative to the observing context's "preparation and postprocessing" timescale.

But it has no explanatory value for causality; it is descriptive only. In the descriptive picture there are only correlations, and they are defined via top-down constraints, not bottom-up. This is why it looks non-local and weird. But this is an artifact of the paradigm: it is not nature that is weird, it is our theoretical paradigm.

If we can succeed in a first-principles construction of normative probabilities (Gamma in this case) as a limiting case of first principles (not via correspondence to the statistical picture of QM), then we are making progress. Contemplating Barandes' picture, I expect this to come from a generalized stochastic process that is NOT unistochastic, but that will spontaneously auto-tune to a unistochastic one. If we can understand that process, we would get not only a "correspondence" but an alternative "reconstruction" of QM, one that does NOT make use of fictive ensembles, stationarity assumptions, or unlimited information-processing capacity in the context.
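(For reference, the "unistochastic" condition mentioned here means that each entry of the transition matrix is the squared modulus of the corresponding entry of some unitary matrix. A minimal numerical sketch of what that condition entails, assuming NumPy; the matrices are illustrative, not taken from Barandes' paper:)

```python
import numpy as np

rng = np.random.default_rng(0)

# Random unitary via QR decomposition of a complex Gaussian matrix
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
U, _ = np.linalg.qr(A)

# Unistochastic matrix: entrywise squared moduli of a unitary
S = np.abs(U) ** 2

# Every unistochastic matrix is doubly stochastic:
# rows and columns both sum to 1
assert np.allclose(S.sum(axis=0), 1.0)
assert np.allclose(S.sum(axis=1), 1.0)

# A generic row-stochastic matrix need not be doubly stochastic,
# and so cannot be unistochastic
T = rng.random((3, 3))
T /= T.sum(axis=1, keepdims=True)   # rows sum to 1
print(np.allclose(T.sum(axis=0), 1.0))  # typically False
```

This only illustrates the necessary doubly-stochastic condition; deciding whether a given doubly stochastic matrix actually is unistochastic is a harder problem in general.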

This is exactly how I view Barandes as supplying a first handle on a different picture. But he supplies the handle at the "stationarity assumption" interface, yes. That can't be the final word!

iste said:
The Omniscient observer has just made a direct calculation of these probabilities in an objectivist / frequentist manner. From the perspective of another kind of observer standing at some point in time of the history of the specific scenario being codified, and subjectively not knowing what will happen next, these probabilities will still have predictive value.
The problem I have is that this omniscient observer is a pure fiction. I can't see that it has any value at all in trying to understand how the parts of nature are orchestrated without external constraints.

/Fredrik
 
  • #393
Fra said:
All descriptive probabilities yes.
No, all physically meaningful probabilities. If you have a probability describing a physical system that is not approximated frequentially when you repeat a scenario indefinitely, then that probability is wrong.
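(The claim that a physically meaningful probability must be approximated by relative frequencies under indefinite repetition is the law of large numbers in action; a minimal simulation of this point, with an arbitrary probability value and trial counts, assuming NumPy:)

```python
import numpy as np

rng = np.random.default_rng(42)
p = 0.3  # the probability a theory assigns to some outcome (illustrative)

# Relative frequency of the outcome over ever-longer repetitions:
# by the law of large numbers it converges to p
for n in (100, 10_000, 1_000_000):
    freq = (rng.random(n) < p).mean()
    print(f"n={n:>9}: frequency = {freq:.4f}")
```

If the assigned probability were wrong, the long-run frequency would settle on some other value, which is the sense in which such a probability "is wrong".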

Fra said:
But its exactly this way to constructing law, that completely obscures the nature of causality.
Well it depends on your notion of causality or laws; but I am not proposing to construct a law in any specific way. Morbert was the one who said that laws are just a way of codifying what we see in terms of something like Humean regularities as opposed to some kind of metaphysical process that generates events we see in the universe.

Fra said:
The problem I have is that this omniscient observer is a pure fiction. I can't see that it has any value at all in trying to understand how the parts of nature are orchestrated without external constraints.

The omniscient observer was just a device for talking about the idea that there are definite trajectories going on even when normal observers cannot see them. He would have access to the objective frequencies concerning these trajectories if experiments could be indefinitely repeated. If you think of laws or physical theories as codifying what you see, then definite trajectories that people cannot see are in principle codifiable. The fact that normal observers cannot see them is epistemic, not metaphysical. My point was that it's difficult for me to envision simultaneously that: 1) the indivisible stochastic transition probabilities reflect some irreducibly fundamental way of describing the microscopic world, and 2) there are definite trajectories. It seems to me that if 2) is the case, then 1) should in principle be reducible to a more fundamental description with a more general set of probabilities.

Fra said:
This view on "law" also is deeply problematic when the stationarity assumption fails; for example adding cosmological or evolutionary perspectives. So current paradigm excels at time scales short, relative to the observing contexts "preparation and postprocessing" timescale.
I don't think what you're saying has anything especially to do with probabilities or with how someone should or should not do science. Theories often fail in certain contexts or limits; that's virtually ubiquitous in science.

Fra said:
that does NOT make use of fictive ensembles, stationarity assumptions, or unlimited information-processing capacity in the context.
In the specific context of this conversation, these are necessary corollaries of definite trajectories that occur when no one is looking and can be given objective probabilities. This is just the general aim of anyone looking for a realistic "hidden variable" perspective, which is how Barandes and others see his theory.
 
