Fra said:
This "memory effect" is simply hard to model with system dynamics in an intuitive way - it unavoidably "shows up" as weird non-local stuff, even when it is in fact not necessarily non-locally mediated.
To understand the difference between "behaviour" and pre-correlated output, I find the agent-based model conceptually superior to system dynamics: instead of modelling the whole happening as a deterministic evolution of a state under a fixed dynamical law, we can have interacting parts that are in principle autonomous - but can become correlated in their behaviour - because they share a history.
I am not familiar enough with your agent-based model to understand how this would work - e.g., how would the autonomy work? But I actually think the "memory effect" can be given a reasonably intuitive description, at least from my interpretational perspective.
I believe the "memory effect" is described by these papers by Wharton et al. and Sutherland, which examine spin weak values carrying information about the intermediate time before a final spin measurement. These spin weak values (both real and imaginary parts) are constant through intermediate times and depend only on the spin directions at the initial and final times, each of which fixes a spin component of the weak value:
"For each possible result, the smallest vector that conforms to both of these constraints, without changing between measurements, is precisely Re(w±)" (Wharton et al.).
The spin weak value calculations in Berry (2011) [section 2] also give a picture of constant spin values constrained only by the pre- and post-selected spin directions. So this seems to be the memory effect - a constant spin vector between initial preparation and final measurement. The problem is that it can look like a retrocausal influence from the measurement settings back to intermediate times.
But we can look at what they are saying directly from a stochastic mechanical perspective, because the stochastic mechanical current and osmotic velocities are identical to the real and imaginary parts of the momentum weak values, respectively, including for spin. Current velocities (the real part) are constructed using expectations over ensembles of particle trajectories, as explicitly depicted by de Matos et al. (2020) here:
https://images.app.goo.gl/cC2oj
Osmotic velocities (the imaginary part) describe the tendency of particles to climb the probability gradient. The expectations of the quantum mechanical momentum and spin operators directly correspond to expectations of stochastic mechanical current velocities (e.g. de Matos, 2020) [section 4.3] - i.e. weak values in the orthodox formalism:
Hiley (2012) (6th equation);
Hosoya & Shikano (2010) [equations (3)].
In the orthodox picture, spin is a property of an individual particle which then changes at the point of measurement, which is difficult to reconcile in a locally realistic manner. But in stochastic mechanics, spin clearly cannot be identified with any individual particle; it is only meaningful statistically, at the level of ensembles of particle trajectories. This opens up the possibility that different measurement orientations just partition whole (counterfactual) ensembles of intermediate trajectories, associated with an initially prepared spin, in different ways that have different spin statistics in accordance with the current and osmotic spin velocity fields. If you think about it, what Malus' law (cos²θ), applied to photons, tells you is just how particles will be distributed across two ensembles in terms of relative frequencies/probabilities. Stochastic mechanics is then just saying that the related spin or polarization directions have to do with the statistics associated with each ensemble as a whole.
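To make the point concrete, here is a minimal sketch (my own toy example, not from any of the cited papers) of reading Malus' law purely as a partition rule: each particle independently lands in one of two sub-ensembles, and cos²θ fixes only the relative frequencies of the partition. The angle and sample size are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.pi / 6        # angle between preparation and analyzer (arbitrary choice)
n = 100_000              # number of particles sent through the set-up

# Malus' law as a partition rule: each particle independently falls into the
# "pass" or "block" sub-ensemble; cos^2(theta) fixes only the relative frequencies.
passed = rng.random(n) < np.cos(theta)**2

# Empirical fraction in the "pass" sub-ensemble, close to cos^2(30°) = 0.75
print(passed.mean())
```

Nothing here refers to an individual particle's "polarization"; the cos²θ only shows up as a statistical property of the two sub-ensembles taken as wholes.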
In the Wharton et al. qubit example you also see that:
"the weighted average of w+ and w− (using their Born-rule probabilities) is exactly (0, 0, 1), with no imaginary part surviving. This average matches î [the initial spin]".
This seems related to my earlier point that quantum mechanical spin expectations can be described as expectations of weak values; Wharton et al. mention this themselves earlier in the paper:
"Nevertheless, in the usual situation where the result of the final measurement is unknown and a weighted average is taken over the possible outcomes, the weak value Re(W[A])(t) can then be shown to be exactly equal to the usual expectation value ⟨A⟩(t)".
From the stochastic mechanical view, the quotes are then saying that the initially prepared spin statistics are related to the statistics of its post-selected components by just a very conventional expectation. That seems to me exactly what you would expect if the final spin outcomes came simply from partitioning an ensemble of intermediate trajectories into subsets which each have different statistics - like any other kind of post-selection in statistics. It's then difficult for me to understand from this perspective why some extra weirdness like retrocausality would be needed (to be honest, I don't think retrocausality even makes sense).
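As a sanity check on the averaging claim, here is a small numerical sketch (again my own, not taken from the papers): prepare a qubit spin-up along z, post-select along an arbitrary axis, compute the spin weak value vector w = ⟨f|σ|i⟩/⟨f|i⟩ for each outcome, and verify that the Born-weighted average of w+ and w− recovers the ordinary expectation (0, 0, 1), with the imaginary parts cancelling - matching the Wharton et al. quote. The post-selection angle is an arbitrary assumption.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

i_state = np.array([1, 0], dtype=complex)   # prepared spin-up along z
theta = 0.7                                  # final measurement axis angle (arbitrary, != pi)
f_plus = np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)
f_minus = np.array([-np.sin(theta / 2), np.cos(theta / 2)], dtype=complex)

def weak_vec(f):
    """Spin weak value vector w = <f|sigma|i> / <f|i> for post-selected state f."""
    amp = np.vdot(f, i_state)
    return np.array([np.vdot(f, s @ i_state) / amp for s in (sx, sy, sz)])

# Born-rule probabilities of the two final outcomes
probs = [abs(np.vdot(f, i_state))**2 for f in (f_plus, f_minus)]

# Weighted average of the two (generally complex) weak value vectors
avg = probs[0] * weak_vec(f_plus) + probs[1] * weak_vec(f_minus)

# The average is the ordinary expectation <sigma> = (0, 0, 1); imaginary parts cancel
print(np.round(avg, 10))
```

The identity holds for any post-selection axis, since Σ_f |⟨f|i⟩|² ⟨f|A|i⟩/⟨f|i⟩ = Σ_f ⟨i|f⟩⟨f|A|i⟩ = ⟨i|A|i⟩ - i.e. exactly the conventional post-selection bookkeeping described above.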
Because any sub-ensemble's statistics at preparation are "remembered" up to the final measurement, in an entanglement scenario you can get the Bell-state correlations by having a perfect correlation locally fixed between any and all sub-ensembles of entangled pairs. Obviously, the correlation can only be physically/methodologically imposed on the particles one pair at a time as they go through the experimental set-up; experimental repetition then builds up (sub-)ensembles with the appropriate statistics.