PeterDonis said:
The issue with the minimal interpretation is that there is no rule that tells you when a measurement occurs. In practice the rule is that you treat measurements as having occurred whenever you have to to match the data. So in your example, since nobody actually observes observers to be in superpositions of pointer states, and observers always observe definite results, in practice we always treat measurements as having occurred by the time an observer observes a result.
stevendaryl said:
So, the minimal interpretation ultimately gives a preference to macroscopic quantities over other variables, but this preference is obfuscated by the use of the word "measurement". The inconsistency is that if you treat the macroscopic system as a huge quantum mechanical system, then no measurement will have taken place at all. The macroscopic system (plus the environment, and maybe the rest of the universe) will not evolve into a definite pointer state.
Is all of this not just a problem related to QM being a probabilistic theory?
For example, if I model a classical system such as a gas using statistical methods like Liouville evolution, I start with an initial distribution ##\rho_0## over the system's phase space, and it evolves into a later one ##\rho_t##. Nothing in the formalism tells me when a measurement occurs that would allow me to reduce ##\rho## to a tighter distribution with smaller support (i.e. Bayesian updating), just as nothing in a probabilistic model of a die tells me when to reduce the uniform distribution over outcomes to a single outcome. Nothing says what a measurement is.
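To make this concrete, here is a minimal sketch (my own illustration, not from the post) of a fair-die probability model. The formalism only holds a distribution; nothing in it dictates *when* to condition. The agent decides an outcome has been "learned" and applies Bayes' rule by hand:

```python
from fractions import Fraction

# Prior: uniform distribution over the six faces of a fair die.
prior = {face: Fraction(1, 6) for face in range(1, 7)}

def bayesian_update(dist, likelihood):
    """Condition dist on evidence described by likelihood(face)."""
    unnorm = {f: p * likelihood(f) for f, p in dist.items()}
    total = sum(unnorm.values())
    return {f: p / total for f, p in unnorm.items() if p > 0}

# The agent learns "the face is even": the support shrinks from
# six points to three. The *decision* to update came from outside
# the model -- nothing in the distribution itself triggered it.
posterior = bayesian_update(prior, lambda f: 1 if f % 2 == 0 else 0)
print(posterior)  # {2: Fraction(1, 3), 4: Fraction(1, 3), 6: Fraction(1, 3)}
```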
Similarly, one could zoom out to a larger agent who uses a distribution not only over the gas from the first example but also over the state space of the device used to measure it (staying within classical mechanics for now). His distribution ##P## will evolve under Liouville's equation into one supported on multiple detection states of the device, in contrast to my case, where the device lies outside the probability model and is used to learn an outcome.
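The zoomed-out agent's situation can be sketched as follows (a toy model of my own, with deterministic dynamics standing in for Liouville flow): the joint distribution over system and device evolves into one supported on *several* pointer states, and the model alone never selects one.

```python
from fractions import Fraction

# System microstate is 'a' or 'b' (probability 1/2 each);
# the device pointer starts in a 'ready' state.
joint = {('a', 'ready'): Fraction(1, 2), ('b', 'ready'): Fraction(1, 2)}

def measure_dynamics(state):
    """Deterministic interaction: the pointer copies the system's microstate."""
    sys, _ = state
    return (sys, 'points_' + sys)

# Evolve the joint distribution (push-forward under the dynamics).
evolved = {}
for state, p in joint.items():
    new = measure_dynamics(state)
    evolved[new] = evolved.get(new, 0) + p

# The zoomed-out agent's marginal over the device: both pointer
# readings remain, each with probability 1/2 -- no outcome selected.
pointer_marginal = {}
for (_, ptr), p in evolved.items():
    pointer_marginal[ptr] = pointer_marginal.get(ptr, 0) + p
print(pointer_marginal)  # {'points_a': Fraction(1, 2), 'points_b': Fraction(1, 2)}
```

Meanwhile the inner agent, who keeps the device outside his model, simply reads the pointer and conditions, as in the die example.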
Any probability model contains the notion of an "agent" who "measures/learns" the value of something. These ideas are unexplained primitives of probability theory (e.g. nothing says what "causes" Bayesian updating). Any "zoomed out" agent who places my devices within their probability model will not consider them to have registered an outcome when I do, but only when they themselves "look".
So to me all of this is replicated in classical probability models. It is not a privileged notion of macroscopic systems, but the unexplained primitive notions of "agent" and "learning/updating" common to all probability models. Introduce an epistemic limit in the classical case and it becomes even more similar to QM, with non-commutativity, no cloning of pure states, superdense coding, entanglement monogamy, a Wigner's friend scenario mathematically identical to the quantum case, and so on.
The major difference between QM and a classical probability model is that any mixed state has a purification on a larger system, i.e. less-than-maximal knowledge of a system ##A## can always be seen as induced by maximal knowledge of a larger system ##B## containing ##A## (the D'Ariano-Chiribella-Perinotti axioms). This is essentially what occurs in Wigner's friend. Wigner assigns a mixed state to his friend's experimental device because he has the maximal possible knowledge (a pure state) of the lab as a whole. The friend does not track the lab as a whole, and thus he can have maximal knowledge (a pure state) of the device.
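The purification property can be shown in a few lines of NumPy (a generic textbook construction, not anything specific to the CDP papers): a mixed ##\rho_A## is recovered as the partial trace of a pure state on the larger system ##A \otimes B##.

```python
import numpy as np

# Wigner's mixed state for the friend's device: a 50/50 mixture.
rho_A = np.diag([0.5, 0.5])

# Standard purification: |psi> = sum_i sqrt(p_i) |i>_A |i>_B,
# built from the eigendecomposition of rho_A.
p, U = np.linalg.eigh(rho_A)
d = len(p)
psi = np.zeros(d * d)
for i in range(d):
    psi += np.sqrt(p[i]) * np.kron(U[:, i], np.eye(d)[i])

# |psi> is pure (normalized) on the "lab as a whole" ...
print(np.isclose(psi @ psi, 1.0))  # True

# ... yet tracing out B returns Wigner's mixed state for the device.
M = psi.reshape(d, d)            # amplitudes indexed as (A, B)
rho_from_psi = M @ M.conj().T    # partial trace over B
print(np.allclose(rho_from_psi, rho_A))  # True
```

No analogous construction exists in a classical probability model: a non-trivial mixture over a classical system cannot arise as the marginal of a point distribution (maximal knowledge) on a larger classical system.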
So, as long as QM is based on probability theory viewed in the usual way, you will always have these odd questions of "when does a measurement occur / when do I update my probabilities?" and "I consider event ##A## to have occurred, but somebody else might not". You could say this is a discomfort that comes from having probability as a fundamental notion in your theory.
If one wishes, a way out of this would be @A. Neumaier 's view, where he reads the formalism differently and not in the conventional statistical manner.