These are just my thoughts on the new Barandes papers.
Regarding Barandes' pilot-wave-as-hidden-Markov-models paper, his use of the phrase "hidden Markov model" seems idiosyncratic to me. I then don't see the rhetorical pull of calling the wave function a "hidden Markov model", because if you're effectively redefining the term to suit your purposes then it looks like question-begging (in the philosophy/logic sense). An HMM is a model, but if HMMs and latent variables are used as attempts to model real underlying structure in data, or real things in the world, then I don't understand equating these things with "parlour tricks" purely in virtue of the label "HMM". Saying that something is just an HMM is almost like saying that physics is just differential equations: that's conflating what is meant to be a description of the real world with the class of formal objects required to represent the world in that description.
It's not entirely obvious to me that describing a pilot wave in terms of an HMM actually refutes its ontological role (even if it looks weird). But even if we assume that the ontological role is held by something else (e.g. the random-variable configuration / sample space), I don't see how calling the pilot wave an HMM refutes its having a nomological role either, because the stochastic-quantum correspondence arguably lets you see the wavefunction as an equally valid way of re-formulating information about non-Markovian behavior. If science generally allows formulating the same behavior in different but ostensibly equivalent frameworks, then how do you decide a preference? Obviously, describing observable probabilistic predictions always requires the stochastic side of the correspondence; but reading the following criticism by another author suggests to me that the complex-number formulation is still a necessary, central part of quantum theory, in the sense that it does the bulk of what you want a good scientific model to do, and so is not made anywhere near redundant by the stochastic description (arXiv:2601.18720v1):
"The physicist’s task is normally to solve problems about how objects move (or more generally, how systems evolve with time). For this, one starts by positing the system’s underlying forces (or potentials). With that, one normally gets a differential equation that is to be solved, either analytically, approximately or numerically. Crucially, one does not have the (exact) evolution of the system, but just its underlying driving forces. In the stochastic approach, however, the Γ matrix already possesses all the information about the system’s evolution (for example, the particle’s trajectory), albeit possibly only stochastically and indivisibly. This means that to get the system’s stochastic characterisation is to have completely solved the problem. But the issue here is that we have no way of knowing what the Γ matrix is unless we first solve the physical problem in a traditional way! Therefore, the stochastic side of things doesn’t have any practical significance, in the sense of helping solve a physical problem."
Just presenting correct observable data is not enough to be equated with a useful, informative model of the world. It seems to me that if you don't have a way of specifying the transition matrices without solving the quantum side of the correspondence first, then much of what enables a usable scientific theory with nomological properties (as opposed to effectively raw, time-dependent data) is still largely captured by the quantum side of the correspondence as things currently stand. Someone could argue that the quantum state or pilot wave's nomological status is protected, or even more fundamental, because you still need it to formulate the content of quantum theory. And this is not necessarily incompatible with giving the quantum state or a pilot wave the label of "HMM", as a class of model that both represents something metaphysically real albeit latent, and generates predictions about observable data.
Arguably, any argument from the weirdness or complicatedness of the complex-number formulation has little strength as far as nomological laws are concerned if the stochastic side is not a self-contained alternative to the quantum side. "But the stochastic side describes all the solutions!"; yeah, but if it is practically useless then it's not worth calling it a model, in the same way that just giving someone empirical data for a specific situation is not the same as giving someone a usable model with nomological content. And it seems quite clear that the indivisible probabilities are identical to quantum theory's empirical content. The non-Markovian transition matrices describe exactly the predictions for a one-shot measurement, and indivisibility itself corresponds directly to the fact that if you had performed an intermediate-time measurement between the initial preparation and the aforementioned one-shot measurement, then the statistics would be disturbed. Personally, I don't think these probabilities should be viewed as more than epistemic with regard to what measurements will tell you, because imo the idea that you can have a stochastic description of reality where there are definite particle trajectories, but the fundamental laws literally provide no fact of the matter about those trajectories, is incoherent. Either the definite trajectories shouldn't exist, which gives you something like the orthodox picture, or you need to allow for some even more fundamental description which gives nature a way to compose these definite trajectories. The latter point of view then opens the door to a pilot-wave perspective. If measurements are in principle the only way we can actually know about trajectories, measurement disturbance seems a sufficient way of explaining why the transition probabilities are non-Markovian and can't give you that trajectory information in a way that is statistically compatible with the one-shot measurement probabilities.
I believe the last point is explicitly shown in this paper about the Kirkwood-Dirac (KD) distribution, arXiv:0705.0229v1, where equation (11b) describes indivisible probabilities from the perspective of the pre-measurement state ρ [i.e. P(b|ρ) ≠ ∑_a P(b|a)P(a|ρ)], but also describes Barandes' manifestly divisible probabilities due to a division event (i.e. (11b) corresponds to equations (53)-(56) of Barandes arXiv:2302.10778v3) when seen from the perspective of the post-measurement state (i.e. (23) is the KD representation for the post-measurement state ρ′ in 0705.0229v1). The entire quasi-distribution described there is operationally meaningful in terms of statistics for consecutive projective measurements, where additional complex modification terms describe how the statistics for measurements on an initial state would be disturbed by an intermediate measurement (i.e. in (14) and (20)). These terms are responsible for the density-matrix off-diagonals and likewise disappear when a measurement is performed (i.e. a Barandes division event), where we move from the pre- to the post-measurement mixed state (i.e. from (12) to (23)). Effectively, from the post-measurement state's perspective you have ρ = ρ′, which is why the disturbance terms vanish after a measurement is performed and leave a classical (i.e. divisible) joint probability distribution (measurements of A always disturb ρ but do not disturb the mixed state ρ′, so trying to construct joint probabilities gives classically incorrect marginalizations when starting from ρ as opposed to ρ′). Also note that in this paper, (24) is virtually identical to Barandes' description of collapse. Basically, from the Kirkwood-Dirac quasi-distribution in arXiv:0705.0229v1 you see all of the indivisible content of Barandes' papers described explicitly in terms of measurement disturbance. One point of difference I see is that the measurements in 0705.0229v1 are explicitly in different measurement bases, but what is described still seems more-or-less the same as the Barandes stochastic-process structure, just without recognizing what is happening between measurements.
[Edit (here and in the above paragraph): I forgot to note that the disturbance terms clearly describe why the joint probabilities for consecutive measurements don't marginalize, i.e. why P(b|ρ) ≠ ∑_a P(b|a)P(a|ρ). The KD distribution's "complex modification" / disturbance terms explicitly encode information about the indivisibility of consecutive measurements.]
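The marginalization failure and the KD marginals can be checked numerically for a single qubit. The following is just a minimal sketch of the general relations; the particular bases and state below are my own illustrative picks, not taken from either paper. It compares the one-shot probability P(b|ρ) with the divisible chain ∑_a P(b|a)P(a|ρ) through an intermediate measurement, and shows that the KD quasi-probabilities Q(a,b) = ⟨b|a⟩⟨a|ρ|b⟩ nevertheless marginalize to the exact one-shot statistics, with complex entries carrying the disturbance information.

```python
import numpy as np

# Toy qubit example (bases and state are my own illustrative choices).
A = [np.array([1, 0]), np.array([0, 1])]              # A: computational (Z) basis
B = [np.array([1, 1]) / np.sqrt(2),                   # B: Hadamard (X) basis
     np.array([1, -1]) / np.sqrt(2)]

theta, phase = np.pi / 5, np.pi / 3
psi = np.array([np.cos(theta), np.exp(1j * phase) * np.sin(theta)])
rho = np.outer(psi, psi.conj())                       # pre-measurement state

def prob(v, r):
    """Born probability <v|r|v>."""
    return float(np.real(v.conj() @ r @ v))

# One-shot ("indivisible") probabilities P(b|rho)
P_b = np.array([prob(b, rho) for b in B])

# Divisible chain through an intermediate A measurement:
# sum_a P(b|a) P(a|rho), with P(b|a) = |<b|a>|^2
P_chain = np.array([sum(abs(b.conj() @ a) ** 2 * prob(a, rho) for a in A)
                    for b in B])

# Kirkwood-Dirac quasi-probabilities Q(a,b) = <b|a><a|rho|b>
Q = np.array([[(b.conj() @ a) * (a.conj() @ rho @ b) for b in B] for a in A])

print("P(b|rho)             :", np.round(P_b, 4))
print("sum_a P(b|a)P(a|rho) :", np.round(P_chain, 4))       # differs: indivisibility
print("sum_a Q(a,b)         :", np.round(Q.sum(axis=0).real, 4))  # recovers P(b|rho)
print("Im Q (disturbance)   :\n", np.round(Q.imag, 4))      # generally nonzero
```

Summing Q over the intermediate outcome a collapses the resolution of the identity and returns ⟨b|ρ|b⟩ exactly, while the classical chain through measured A outcomes does not; the gap between the two rows is exactly the disturbance the intermediate measurement would introduce.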
Nor do I think the gauge-transformation argument he presents is a strong argument against a nomological view of a pilot wave. Clearly, it implies there are multiple possible consistent descriptions of Bohmian trajectories in a given scenario; but it's not obviously a problem to me that a scenario is compatible with multiple different underlying physical descriptions under the same nomological content, especially if there are limits on what can be observed. I would say a good model would not unnecessarily specify a description if such redundancy exists.
I think it's probably also worth noting, relevant to the bottom of section 4.3 and perhaps his other weak-value paper, that post-selected weak values are identical to conditional expectations of the Kirkwood-Dirac quasi-distribution (described just above), which gives an informationally complete representation of the quantum state. (The post-selection probabilities are actually an inherent part of the KD representation, so the role of the post-selection is not strictly just a methodological artefact, though people may still use post-selection in an illegitimate way, as Barandes suggests.) Weak measurements can then be seen as ways of directly measuring the quantum state, even if you don't give the quantum state a physical, ontological interpretation (e.g.
https://doi.org/10.1088/1367-2630/ada05d).
My last criticism, of his ABL-rule paper, is that even though he may be right that the measurement process isn't actually time-symmetric, I think the way Vaidman formulates it is, because you evolve forward from the initial preparation and backward from the post-selection. If you switch the pre- and post-selection, you are just swapping their roles in terms of which one is the backward and which one is the forward evolution. This is precisely analogous to the shopping analogy in section 3.4 of the ABL-rule paper, which Barandes uses to give what he thinks time-reversal should look like. Given what was said in the last paragraph, you can see weak values and the ABL-rule probabilities as giving two different quasi-distributions for the exact same scenario. You can then note that weak values complex-conjugate both under time-reversal and under just swapping the pre- and post-selections (e.g. arXiv:1801.04364v2; arXiv:quant-ph/0105101v2), which suggests that swapping the measurement order and reversing time in a given scenario are indistinguishable at the level of the formal description. Another big point of the original ABL paper that Barandes neglects, but which is important to ABL time-symmetry, is that, at least under the ABL description, the Born rule doesn't have a preferred direction of time. To produce the regular Born rule, you perform a procedure that makes the intermediate-time measurement independent of the post-selection. If you perform an analogous procedure to make it independent of the pre-selection, then you get the Born rule backward in time, and so there does not seem to be an inherent time-direction that the Born rule works in on this ABL picture. ABL then suggest that quantum theory is inherently time-symmetric, and any time-asymmetry is contingent: not an inherent feature of quantum theory, but of how the universe happens to be, distinct from what quantum theory fundamentally prescribes (e.g. asymmetry due to entropy gradients, knowledge or memories of the past, the ability to make free choices about preparations). This contingent asymmetry is what Barandes is identifying, and so from ABL's perspective, Barandes is attacking a strawman.
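The swap symmetry claimed above is easy to verify directly from the standard formulas: the ABL probability for an intermediate outcome a is proportional to |⟨φ|a⟩⟨a|ψ⟩|², which is manifestly unchanged when pre-selection |ψ⟩ and post-selection |φ⟩ trade places, while the weak value ⟨φ|a⟩⟨a|ψ⟩/⟨φ|ψ⟩ goes to its complex conjugate. A quick numeric sketch (the particular states and basis are my own arbitrary choices):

```python
import numpy as np

# Arbitrary illustrative pre- and post-selected qubit states.
basis = [np.array([1, 0]), np.array([0, 1])]
pre  = np.array([np.cos(0.3), np.exp(0.7j) * np.sin(0.3)])    # pre-selection |psi>
post = np.array([np.sin(1.1), np.exp(-0.4j) * np.cos(1.1)])   # post-selection |phi>

def abl(psi, phi, states):
    """ABL probabilities P(a) ~ |<phi|a><a|psi>|^2, normalized over outcomes a."""
    w = np.array([abs((phi.conj() @ a) * (a.conj() @ psi)) ** 2 for a in states])
    return w / w.sum()

def weak_value(psi, phi, a):
    """Post-selected weak value <phi|a><a|psi> / <phi|psi>."""
    return (phi.conj() @ a) * (a.conj() @ psi) / (phi.conj() @ psi)

# ABL probabilities are unchanged when pre- and post-selection swap roles...
print(np.allclose(abl(pre, post, basis), abl(post, pre, basis)))   # True

# ...while weak values go to their complex conjugates under the same swap.
w, w_swapped = weak_value(pre, post, basis[0]), weak_value(post, pre, basis[0])
print(np.isclose(w_swapped, np.conj(w)))                           # True
```

Since |⟨φ|a⟩⟨a|ψ⟩|² = |⟨ψ|a⟩⟨a|φ⟩|² identically, the swap invariance of the ABL probabilities holds for any pair of states, not just these.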