I have to read up on the discussions we had years ago; I appreciated your emphasis on signal processing, but I don't remember where we diverged. I don't remember if I asked you about this before, but...
Peter Morgan said:
that something about the idea of an experimenter and their various relationships with an experiment is "even self-evidently true", which is in stark contrast with almost anything that has ever been said about an observer. I suggest there's a kind of Shut-Up-And-Calculate simplicity to the idea of an experimenter that there is not for the idea of an observer.
...
I suppose most working physicists don't often ask what an observer is and does, whereas they are almost always thinking about experiments and how to make their favorite experiment happen.
Have you tried to put your ideas in the context of predictive autoencoders? It seems to me it would be a nice match with the idea of processing "datasets". I.e. the predictive autoencoder (which it is tempting to call an "observer") registers input, compresses the data to respect memory constraints, and retains the presumably interesting patterns. From this compressed code, predictions of the future follow (via some model; and pinpointing this model is essentially the problem of finding the Hamiltonian).
Normal autoencoders are common technology in digital sound processing; a predictive encoder does not compress and reconstruct, but compresses and predicts with future statistics in mind.
This would get us away from fictive "ensembles". Instead one predicts the future sequence from a specific history, so no ensembles are needed. This would make sense both for smaller "observers" and for the experimental human context; the difference is the memory and computational capacity the "autoencoder" has.
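To make the idea above concrete, here is a minimal sketch of such a predictive autoencoder, assuming a purely linear encoder/decoder and a toy sinusoidal input stream (all names, sizes, and the signal are illustrative, not anyone's actual model). The encoder compresses a window of past samples into a low-dimensional code (the memory constraint); the decoder predicts the *next* sample from that code rather than reconstructing the input.

```python
import numpy as np

rng = np.random.default_rng(0)

window, code_dim = 16, 2                      # history length vs. memory budget
W_enc = rng.normal(scale=0.1, size=(code_dim, window))
W_dec = rng.normal(scale=0.1, size=(1, code_dim))

signal = np.sin(0.3 * np.arange(2000))        # toy input stream

lr = 0.01
losses = []
for t in range(window, len(signal) - 1):
    x = signal[t - window:t]                  # registered history
    z = W_enc @ x                             # compressed code
    y = (W_dec @ z)[0]                        # predicted next sample
    err = y - signal[t]                       # prediction error
    losses.append(err ** 2)
    # online gradient descent on the squared prediction error
    grad_dec = err * z[None, :]
    grad_enc = err * np.outer(W_dec[0], x)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

early, late = np.mean(losses[:100]), np.mean(losses[-100:])
print(f"early MSE {early:.4f} -> late MSE {late:.4f}")
```

Note the key difference from an ordinary autoencoder: the training target is `signal[t]`, a future sample, not the input window itself. "Finding the Hamiltonian" would correspond here to finding the prediction model; this linear sketch is of course only the simplest placeholder for that.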
QM as it stands is formulated as if memory and processing are never limiting.
But what if the explanation for why nature entertains non-commutative encodings lies precisely in that they are more efficient? Then we will not see it until we acknowledge the physical limitations of "observation".
This is why I think the observer concept is more central than ever.
So I'm curious to hear if you have put your idea in this context?
Edit after skimming more:
1) On page 3 of your slides you write "Something in a theory should generate expected averages etc for future datasets"; this is exactly the concept of a predictive autoencoder.
Seen as a deep learning autoencoder, the "extras" are in the hidden layers; they are "hidden from the direct observer" but still real, and possibly "inferable", even if not as an "observable".
2) Page 4: "what people do and not do" is also something one could abstractly think of as the reinforced training and evolutionary adjustment of the "hidden structure" of the autoencoder. So I argue that one can think that even an "abstract observer" does all this! But in a "different language", at a lower level.
(And my question was whether you like or do not like these associations and this perspective on your ideas.)
3) Related reference: "This shows that the learning dynamics of a neural network can indeed exhibit approximate behaviors described by both quantum mechanics and general relativity. We also discuss a possibility that the two descriptions are holographic duals of each other." - Vitaly Vanchurin, "The world as a neural network". This duality is exactly what the external vs. internal views are supposed to be! They are not in conflict; they are dual views, but with different advantages.
4) Another reference
"Deep Learning and AdS/CFT". Again, this is data-driven and processing-driven, not using fictive ensembles or infinite statistics.
/Fredrik